| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 1k |
| labels | stringlengths | 4 to 1.38k |
| body | stringlengths | 1 to 262k |
| index | stringclasses | 16 values |
| text_combine | stringlengths | 96 to 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 252k |
| binary_label | int64 | 0 to 1 |
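A minimal sketch of reading such a dump and sanity-checking it against the schema above, using only the standard library. The file name `issues.csv` is hypothetical, and the inline sample reuses field values from the records below rather than real file contents:

```python
import csv
import io

# Hypothetical two-row sample mirroring the schema; a real pipeline would
# read the actual dump from disk, e.g. open("issues.csv") (name assumed).
sample = io.StringIO(
    "Unnamed: 0,id,type,created_at,repo,action,label,binary_label\n"
    "289063,8854501838,IssuesEvent,2019-01-09 01:39:09,Polymer/lit-element,closed,priority,1\n"
    "697118,23928140579,IssuesEvent,2022-09-10 05:27:24,kubernetes-sigs/cluster-api-provider-vsphere,closed,priority,1\n"
)
rows = list(csv.DictReader(sample))

# Sanity checks matching the schema block: `type` has a single class and
# `binary_label` takes only the values 0 and 1.
assert all(r["type"] == "IssuesEvent" for r in rows)
assert all(r["binary_label"] in {"0", "1"} for r in rows)
print(rows[0]["repo"])  # -> Polymer/lit-element
```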
Unnamed: 0: 289,063
id: 8,854,501,838
type: IssuesEvent
created_at: 2019-01-09 01:39:09
repo: Polymer/lit-element
repo_url: https://api.github.com/repos/Polymer/lit-element
action: closed
title: [docs] Add info on testing
labels: Area: docs Priority: Critical Status: Available Type: Maintenance
body:
<!-- If you are asking a question rather than filing a bug, try one of these instead: - StackOverflow (https://stackoverflow.com/questions/tagged/polymer) - Polymer Slack Channel (https://bit.ly/polymerslack) - Mailing List (https://groups.google.com/forum/#!forum/polymer-dev) --> <!-- Instructions For Filing a Bug: https://github.com/Polymer/lit-element/blob/master/CONTRIBUTING.md#filing-bugs --> ### Description <!-- Example: Error thrown when calling `appendChild` on Lit element --> #### Live Demo <!-- Stackblitz starting point (fork and edit) --> https://stackblitz.com/edit/lit-element-example?file=index.js <!-- glitch.me starting point (remix and edit -- must be logged in to persist!) --> https://glitch.com/edit/#!/hello-lit-element?path=index.html:1:0 <!-- ...or provide your own repro URL --> #### Steps to Reproduce <!-- Example: 1. Create `my-element` 2. Append `my-element` to document.body 3. Create `div`. 4. Append `div` to `my-element` --> #### Expected Results <!-- Example: No error is throw --> #### Actual Results <!-- Example: Error is thrown --> ### Browsers Affected <!-- Check all that apply --> - [ ] Chrome - [ ] Firefox - [ ] Edge - [ ] Safari 11 - [ ] Safari 10 - [ ] IE 11 ### Versions <!-- `npm ls` will show the version of webcomponents.js and lit-element --> - lit-element: vX.X.X - webcomponents: vX.X.X I got a problem while testing my library (github.com/yanishoss/polymer-toolkit) with Jest. Jest doesn't support ES6 modules and @polymer/lit-element is not compiled, thereby i got this message: FAIL tests/redux.test.ts ● Test suite failed to run Jest encountered an unexpected token This usually means that you are trying to import a file which Jest cannot parse, e.g. it's not plain JavaScript. By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules". Here's what you can do: • To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config. 
• If you need a custom transformation specify a "transform" option in your config. • If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option. You'll find more details and examples of these config options in the docs: https://jestjs.io/docs/en/configuration.html Details: /home/travis/build/yanishoss/polymer-toolkit/node_modules/@polymer/lit-element/lit-element.js:1 ({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest){import { PropertiesMixin } from '@polymer/polymer/lib/mixins/properties-mixin.js'; ^ SyntaxError: Unexpected token { > 1 | import { html } from '@polymer/lit-element'; | ^ 2 | import { Dispatch } from 'redux'; 3 | import { ReduxLitElement } from '../src'; 4 | import { IAction, IState, store, Type } from './store'; at ScriptTransformer._transformAndBuildScript (node_modules/jest-runtime/build/script_transformer.js:403:17) at Object.<anonymous> (tests/redux.test.ts:1:1)
index: 1.0
label: priority
binary_label: 1
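The derived columns in each record appear to follow a simple construction: `text_combine` concatenates title and body, `text` is a lowercased, punctuation-stripped version of that, and `binary_label` flags whether `label` equals "priority". A sketch of that inferred preprocessing follows; the exact normalization rules used to build the dataset are not documented, so this is an approximation:

```python
import re


def preprocess(title: str, body: str, label: str) -> dict:
    """Approximate reconstruction of the dataset's derived columns.

    The concatenation separator and the normalization regex are assumptions
    inferred from the records, not a documented recipe.
    """
    text_combine = f"{title} - {body}"
    # Lowercase, replace everything outside [a-z0-9 whitespace] with spaces,
    # then collapse runs of whitespace.
    text = re.sub(r"[^a-z0-9\s]", " ", text_combine.lower())
    text = re.sub(r"\s+", " ", text).strip()
    return {
        "text_combine": text_combine,
        "text": text,
        "binary_label": 1 if label == "priority" else 0,
    }


record = preprocess("[docs] Add info on testing",
                    "Jest doesn't support ES6 modules", "priority")
print(record["text"])  # -> docs add info on testing jest doesn t support es6 modules
print(record["binary_label"])  # -> 1
```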
Unnamed: 0: 697,118
id: 23,928,140,579
type: IssuesEvent
created_at: 2022-09-10 05:27:24
repo: kubernetes-sigs/cluster-api-provider-vsphere
repo_url: https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-vsphere
action: closed
title: Binary exposes testing related flags
labels: kind/bug priority/critical-urgent
body:
/kind bug **What steps did you take and what happened:** Currently the binary exposes the following flags: ```bash cluster-api-provider-vsphere on  main [$?] via 🏎💨 v1.18.5 ➜ ./binary --help Usage of ./binary: -add_dir_header If true, adds the file directory to the header of the log messages -alsologtostderr log to standard error as well as files -credentials-file string path to CAPV credentials file (default "/etc/capv/credentials.yaml") -enable-integration-tests Enables integration tests -enable-keep-alive DEPRECATED: feature to enable keep alive handler in vsphere sessions. This functionality is enabled by default now -enable-leader-election Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager. (default true) -enable-unit-tests Enables unit tests (default true) -ginkgo.debug If set, ginkgo will emit node output to files when running in parallel. -ginkgo.dryRun If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v. -ginkgo.failFast If set, ginkgo will stop running a test suite after a failure occurs. -ginkgo.failOnPending If set, ginkgo will mark the test suite as failed if any specs are pending. -ginkgo.flakeAttempts int Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1) -ginkgo.focus value If set, ginkgo will only run specs that match this regular expression. Can be specified multiple times, values are ORed. -ginkgo.noColor If set, suppress color output in default reporter. -ginkgo.noisyPendings If set, default reporter will shout about pending tests. (default true) -ginkgo.noisySkippings If set, default reporter will shout about skipping tests. (default true) -ginkgo.parallel.node int This worker nodes (one-indexed) node number. For running specs in parallel. 
(default 1) -ginkgo.parallel.streamhost string The address for the server that the running nodes should stream data to. -ginkgo.parallel.synchost string The address for the server that will synchronize the running nodes. -ginkgo.parallel.total int The total number of worker nodes. For running specs in parallel. (default 1) -ginkgo.progress If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter. -ginkgo.randomizeAllSpecs If set, ginkgo will randomize all specs together. By default, ginkgo only randomizes the top level Describe, Context and When groups. -ginkgo.regexScansFilePath If set, ginkgo regex matching also will look at the file path (code location). -ginkgo.reportFile string Override the default reporter output file path. -ginkgo.reportPassed If set, default reporter prints out captured output of passed tests. -ginkgo.seed int The seed used to randomize the spec suite. (default 1662158123) -ginkgo.skip value If set, ginkgo will only run specs that do not match this regular expression. Can be specified multiple times, values are ORed. -ginkgo.skipMeasurements If set, ginkgo will skip any measurement specs. -ginkgo.slowSpecThreshold float (in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5) -ginkgo.succinct If set, default reporter prints out a very succinct report -ginkgo.trace If set, default reporter prints out the full stack trace when a failure occurs -ginkgo.v If set, default reporter print out all specs as they begin. -health-addr string The address the health endpoint binds to. (default ":9440") -keep-alive-duration duration idle time interval(minutes) in between send() requests in keepalive handler (default 5m0s) -kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster. -leader-election-id string Name of the config map to use as the locking resource when configuring leader election. 
(default "capv-controller-manager-runtime") -log_backtrace_at value when logging hits line file:N, emit a stack trace -log_dir string If non-empty, write log files in this directory -log_file string If non-empty, use this log file -log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) -logtostderr log to standard error instead of files (default true) -max-concurrent-reconciles int The maximum number of allowed, concurrent reconciles. (default 10) -metrics-addr string The address the metric endpoint binds to. (default "localhost:8080") -namespace string Namespace that the controller watches to reconcile cluster-api objects. If unspecified, the controller watches for cluster-api objects across all namespaces. -network-provider string network provider to be used by Supervisor based clusters. -one_output If true, only write logs to their native severity level (vs also writing to each lower severity level) -pod-name string The name of the pod running the controller manager. (default "capv-controller-manager") -profiler-address string Bind address to expose the pprof profiler (e.g. localhost:6060) -root-dir string Root project directory (default "../..") -skip_headers If true, avoid header prefixes in the log messages -skip_log_headers If true, avoid headers when opening log files -stderrthreshold value logs at or above this threshold go to stderr (default 2) -sync-period duration The interval at which cluster-api objects are synchronized (default 10m0s) -v value number for the log level verbosity -vmodule value comma-separated list of pattern=N settings for file-filtered logging -webhook-port int Webhook Server port (set to 0 to disable) ``` From the output, it is obvious most of the `gingko` prefixed flags are coming in from the ginkgo dependency since it is being referred to via a non-test file. 
A couple of other flags come from porting the legacy code from the CAPW codebase which exposed flags to choose between running and/or integration tests. **What did you expect to happen:** The `--help` section only shows flags relevant to the binary. **Anything else you would like to add:** This is not a breaking change since these flags are not used in the deployment files and in no way affect the functionality of the binary. **Environment:** - Cluster-api-provider-vsphere version: `v1.3.x`
index: 1.0
label: priority
binary_label: 1
Unnamed: 0: 211,444
id: 7,201,170,222
type: IssuesEvent
created_at: 2018-02-05 21:36:31
repo: buttercup/buttercup-desktop
repo_url: https://api.github.com/repos/buttercup/buttercup-desktop
action: closed
title: Use non-minified buttercup core library (speed issue)
labels: Effort: Low Priority: High Status: Abandoned Type: Bug
body:
The core currently has an issue with minification, in that the `buttercup-web.min.js` asset is much slower than the non-minified copy when it comes to crypto (locking/unlocking archives via `ArchiveManager`). Issue: buttercup/buttercup-core#199 Use the non-minified copy for now.
index: 1.0
label: priority
binary_label: 1
Unnamed: 0: 612,622
id: 19,027,017,916
type: IssuesEvent
created_at: 2021-11-24 05:45:49
repo: aitos-io/BoAT-X-Framework
repo_url: https://api.github.com/repos/aitos-io/BoAT-X-Framework
action: closed
title: linux-default/src/port_crypto_default/boatplatform.c Compile failed
labels: bug Severity/major Priority/P2
body:
when " SOFT_CRYPTO ?= CRYPTO_DEFAULT" ,linux-default/src/port_crypto_default/boatplatform.c Compile failed。 error: implicit declaration of function 'keccak_256' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
index: 1.0
label: priority
binary_label: 1
Unnamed: 0: 264,745
id: 8,319,194,918
type: IssuesEvent
created_at: 2018-09-25 16:31:33
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: www.youtube.com - site is not usable
labels: browser-firefox priority-critical
body:
<!-- @browser: Firefox 63.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.youtube.com/watch?v=ME2sKsRYmoM&index=4&list=PL2mWImrXdC2sjH3ZO2LnC4pb3L4g1NBpk **Browser / Version**: Firefox 63.0 **Operating System**: Windows 8.1 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: I don't need this site but it's always the first to come out when I open Youtube. **Steps to Reproduce**: I tried to change it. [![Screenshot Description](https://webcompat.com/uploads/2018/9/9711bd09-44c3-4efc-9c74-108d8a91b1ce-thumb.jpg)](https://webcompat.com/uploads/2018/9/9711bd09-44c3-4efc-9c74-108d8a91b1ce.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>buildID: 20180920135444</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
priority
site is not usable url browser version firefox operating system windows tested another browser no problem type site is not usable description i don t need this site but it s always the first to come out when i open youtube steps to reproduce i tried to change it browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel beta from with ❤️
1
93,357
26,933,358,767
IssuesEvent
2023-02-07 18:30:23
microsoft/fluentui
https://api.github.com/repos/microsoft/fluentui
opened
[Epic]: 1JS Visual Regression Tool feature suggestions
Area: Build System
## Story: - This issue will serve as the main hub for any 1JS VR tool feature requests that you may have. Please add a comment to this issue for any requests you may have and then update this issue. ### Feature Requests: - [ ] Have VR results be integrated as blocking github checks. - [ ] "Magnify" feature does not zoom in to the component in snapshot - Example: Open [link ](https://staticpagerelease.z16.web.core.windows.net/azurestaticapps/vr-fluent/index.html?organization=uifabric&project=fabricpublic&prId=26725&commitId=df778e39c0c3daaa507128377960e6a391372506&env=prod), click on VR result and click on magnify to observe issue. - [ ] A way to view ALL existing snapshots (changed and unchanged)
1.0
[Epic]: 1JS Visual Regression Tool feature suggestions - ## Story: - This issue will serve as the main hub for any 1JS VR tool feature requests that you may have. Please add a comment to this issue for any requests you may have and then update this issue. ### Feature Requests: - [ ] Have VR results be integrated as blocking github checks. - [ ] "Magnify" feature does not zoom in to the component in snapshot - Example: Open [link ](https://staticpagerelease.z16.web.core.windows.net/azurestaticapps/vr-fluent/index.html?organization=uifabric&project=fabricpublic&prId=26725&commitId=df778e39c0c3daaa507128377960e6a391372506&env=prod), click on VR result and click on magnify to observe issue. - [ ] A way to view ALL existing snapshots (changed and unchanged)
non_priority
visual regression tool feature suggestions story this issue will serve as the main hub for any vr tool feature requests that you may have please add a comment to this issue for any requests you may have and then update this issue feature requests have vr results be integrated as blocking github checks magnify feature does not zoom in to the component in snapshot example open click on vr result and click on magnify to observe issue a way to view all existing snapshots changed and unchanged
0
18,928
6,657,930,798
IssuesEvent
2017-09-30 12:50:05
servo/servo
https://api.github.com/repos/servo/servo
closed
Can't build servo - "Failed to run custom build command for `harfbuzz"
A-build A-platform/shaping
This is the issue i'm getting. I've searched around, but not found anything. Anyone got any ideas. Running latest version of OSX Yosemite. 10.10.1 ``` /Users/kevin.cannon/.cargo/git/checkouts/gcc-rs-769a83e2edfc991e/servo/src/lib.rs:1:12: 1:18 warning: feature has been added to Rust, directive not necessary /Users/kevin.cannon/.cargo/git/checkouts/gcc-rs-769a83e2edfc991e/servo/src/lib.rs:1 #![feature(if_let)] ^~~~~~ Compiling lazy_static v0.1.0 (https://github.com/Kimundi/lazy-static.rs#76f06e4f) Build failed, waiting for other jobs to finish... Failed to run custom build command for `harfbuzz v0.1.0 (https://github.com/servo/rust-harfbuzz#59b5b180)` Process didn't exit successfully: `make -f makefile.cargo` (status=2) --- stderr makefile.cargo:75: warning: overriding commands for target `/Users/kevin.cannon/Documents/05.' makefile.cargo:72: warning: ignoring old commands for target `/Users/kevin.cannon/Documents/05.' make: *** No rule to make target `harfbuzz/src/%.cc', needed by `/Users/kevin.cannon/Documents/05.'. Stop. ```
1.0
Can't build servo - "Failed to run custom build command for `harfbuzz" - This is the issue i'm getting. I've searched around, but not found anything. Anyone got any ideas. Running latest version of OSX Yosemite. 10.10.1 ``` /Users/kevin.cannon/.cargo/git/checkouts/gcc-rs-769a83e2edfc991e/servo/src/lib.rs:1:12: 1:18 warning: feature has been added to Rust, directive not necessary /Users/kevin.cannon/.cargo/git/checkouts/gcc-rs-769a83e2edfc991e/servo/src/lib.rs:1 #![feature(if_let)] ^~~~~~ Compiling lazy_static v0.1.0 (https://github.com/Kimundi/lazy-static.rs#76f06e4f) Build failed, waiting for other jobs to finish... Failed to run custom build command for `harfbuzz v0.1.0 (https://github.com/servo/rust-harfbuzz#59b5b180)` Process didn't exit successfully: `make -f makefile.cargo` (status=2) --- stderr makefile.cargo:75: warning: overriding commands for target `/Users/kevin.cannon/Documents/05.' makefile.cargo:72: warning: ignoring old commands for target `/Users/kevin.cannon/Documents/05.' make: *** No rule to make target `harfbuzz/src/%.cc', needed by `/Users/kevin.cannon/Documents/05.'. Stop. ```
non_priority
can t build servo failed to run custom build command for harfbuzz this is the issue i m getting i ve searched around but not found anything anyone got any ideas running latest version of osx yosemite users kevin cannon cargo git checkouts gcc rs servo src lib rs warning feature has been added to rust directive not necessary users kevin cannon cargo git checkouts gcc rs servo src lib rs compiling lazy static build failed waiting for other jobs to finish failed to run custom build command for harfbuzz process didn t exit successfully make f makefile cargo status stderr makefile cargo warning overriding commands for target users kevin cannon documents makefile cargo warning ignoring old commands for target users kevin cannon documents make no rule to make target harfbuzz src cc needed by users kevin cannon documents stop
0
603,579
18,669,316,778
IssuesEvent
2021-10-30 11:57:12
Corne2Plum3/fnf2osumania
https://api.github.com/repos/Corne2Plum3/fnf2osumania
closed
I can't hit the notes of a song
bug priority=1
**Describe the bug** When playing a song that I converted (Blocked by the Vs Dave mod) only the first notes are hittable for a strange reason.. **Error message** No error message **Screenshots:** I will attach a video if I can Update: no, I can't sorry, my pc lags af when trying to record **Program version:** v1.1.2 **Platform:** Windows 11 .exe file **Used files:** [Files.zip](https://github.com/Corne2Plum3/fnf2osumania/files/7444823/Files.zip) **Additional context:**
1.0
I can't hit the notes of a song - **Describe the bug** When playing a song that I converted (Blocked by the Vs Dave mod) only the first notes are hittable for a strange reason.. **Error message** No error message **Screenshots:** I will attach a video if I can Update: no, I can't sorry, my pc lags af when trying to record **Program version:** v1.1.2 **Platform:** Windows 11 .exe file **Used files:** [Files.zip](https://github.com/Corne2Plum3/fnf2osumania/files/7444823/Files.zip) **Additional context:**
priority
i can t hit the notes of a song describe the bug when playing a song that i converted blocked by the vs dave mod only the first notes are hittable for a strange reason error message no error message screenshots i will attach a video if i can update no i can t sorry my pc lags af when trying to record program version platform windows exe file used files additional context
1
623,374
19,666,271,172
IssuesEvent
2022-01-10 23:01:18
South-Carolina-Language-Map/South-Carolina-Language-Map
https://api.github.com/repos/South-Carolina-Language-Map/South-Carolina-Language-Map
opened
Snackbar throws occasional errors, but updates go through
bug Low Priority
I don't think this is breaking anything, just throwing an error, because technically the snackbar is rendering as a child of a table element
1.0
Snackbar throws occasional errors, but updates go through - I don't think this is breaking anything, just throwing an error, because technically the snackbar is rendering as a child of a table element
priority
snackbar throws occasional errors but updates go through i don t think this is breaking anything just throwing an error because technically the snackbar is rendering as a child of a table element
1
103,762
8,948,779,205
IssuesEvent
2019-01-25 04:10:08
67P/hyperchannel
https://api.github.com/repos/67P/hyperchannel
opened
Share pictures, files, code snippets, etc. via RS
feature remotestorage
This one needs multiple separate issues, because sharing a code snippet will need different UI and function calls than sharing an image file for example. Just adding this as a tracking issue, so we have it on the roadmap.
1.0
Share pictures, files, code snippets, etc. via RS - This one needs multiple separate issues, because sharing a code snippet will need different UI and function calls than sharing an image file for example. Just adding this as a tracking issue, so we have it on the roadmap.
non_priority
share pictures files code snippets etc via rs this one needs multiple separate issues because sharing a code snippet will need different ui and function calls than sharing an image file for example just adding this as a tracking issue so we have it on the roadmap
0
748,901
26,142,589,547
IssuesEvent
2022-12-29 21:01:31
ceph/ceph-csi
https://api.github.com/repos/ceph/ceph-csi
closed
Few E2E tests are getting skipped in devel branch
bug wontfix Priority-0
> @Madhu-1 this keeps failing, maybe there is an issue with it? I see a problem with tests in the devel branch here https://github.com/ceph/ceph-csi/blob/0f0957164ec882a91d11bf1c880e1c0777abfb01/e2e/rbd.go#L4592-L4610. You can see the ceph users are deleted. Now we are trying to run one more test, `pvcDeleteWhenPoolNotFound` which creates the PVC and deletes the PVC as the user is deleted. The test should never work. I feel somewhere something went wrong in the devel branch. A bunch of tests are getting skipped, and we are not running all the tests for sure. Because of some missing patch in the release-3.7 branch, this is getting caught now, I fixed this in the release-3.7 branch. We need to check why the full test suite is not running/why tests are getting skipped. _Originally posted by @Madhu-1 in https://github.com/ceph/ceph-csi/issues/3531#issuecomment-1321884300_
1.0
Few E2E tests are getting skipped in devel branch - > @Madhu-1 this keeps failing, maybe there is an issue with it? I see a problem with tests in the devel branch here https://github.com/ceph/ceph-csi/blob/0f0957164ec882a91d11bf1c880e1c0777abfb01/e2e/rbd.go#L4592-L4610. You can see the ceph users are deleted. Now we are trying to run one more test, `pvcDeleteWhenPoolNotFound` which creates the PVC and deletes the PVC as the user is deleted. The test should never work. I feel somewhere something went wrong in the devel branch. A bunch of tests are getting skipped, and we are not running all the tests for sure. Because of some missing patch in the release-3.7 branch, this is getting caught now, I fixed this in the release-3.7 branch. We need to check why the full test suite is not running/why tests are getting skipped. _Originally posted by @Madhu-1 in https://github.com/ceph/ceph-csi/issues/3531#issuecomment-1321884300_
priority
few tests are getting skipped in devel branch madhu this keeps failing maybe there is an issue with it i see a problem with tests in the devel branch here you can see the ceph users are deleted now we are trying to run one more test pvcdeletewhenpoolnotfound which creates the pvc and deletes the pvc as the user is deleted the test should never work i feel somewhere something went wrong in the devel branch a bunch of tests are getting skipped and we are not running all the tests for sure because of some missing patch in the release branch this is getting caught now i fixed this in the release branch we need to check why the full test suite is not running why tests are getting skipped originally posted by madhu in
1
15,566
3,475,711,754
IssuesEvent
2015-12-26 00:55:12
nnnick/Chart.js
https://api.github.com/repos/nnnick/Chart.js
closed
Donut chart active segment mouse over cursor style not applyed
Needs test case
Hi, i am using Donut chart ,i don't have any option to set like when we mouse over on active segment applying the cursor style to that current segment I have saw in stackoverflow link: http://stackoverflow.com/questions/30741085/mouse-over-event-and-cursor-pointer-on-the-line-series there mouse over in points in a line css style cursor getting applied that to it is kendo chartjs. is there any solution to resolve this to apply own style's in to segments where we mouse over in donut chart Best Regards, Nagendra
1.0
Donut chart active segment mouse over cursor style not applyed - Hi, i am using Donut chart ,i don't have any option to set like when we mouse over on active segment applying the cursor style to that current segment I have saw in stackoverflow link: http://stackoverflow.com/questions/30741085/mouse-over-event-and-cursor-pointer-on-the-line-series there mouse over in points in a line css style cursor getting applied that to it is kendo chartjs. is there any solution to resolve this to apply own style's in to segments where we mouse over in donut chart Best Regards, Nagendra
non_priority
donut chart active segment mouse over cursor style not applyed hi i am using donut chart i don t have any option to set like when we mouse over on active segment applying the cursor style to that current segment i have saw in stackoverflow link there mouse over in points in a line css style cursor getting applied that to it is kendo chartjs is there any solution to resolve this to apply own style s in to segments where we mouse over in donut chart best regards nagendra
0
711,984
24,481,129,480
IssuesEvent
2022-10-08 21:11:55
epicmaxco/vuestic-ui
https://api.github.com/repos/epicmaxco/vuestic-ui
closed
Bundlers tests should execute on any OS
good first issue LOW PRIORITY
* **What is the expected behavior?** Bundlers tests should execute on any OS. * **What is the current behavior?** Bundlers tests `package.json` commands work only for Linux, for example: `"build": "rm -rf ./dist && yarn build:vite && yarn build:vue-cli"` Looks like the best way to resolve this is by writing new node.js script that will handle it ( import `os` and `exec` ).
1.0
Bundlers tests should execute on any OS - * **What is the expected behavior?** Bundlers tests should execute on any OS. * **What is the current behavior?** Bundlers tests `package.json` commands work only for Linux, for example: `"build": "rm -rf ./dist && yarn build:vite && yarn build:vue-cli"` Looks like the best way to resolve this is by writing new node.js script that will handle it ( import `os` and `exec` ).
priority
bundlers tests should execute on any os what is the expected behavior bundlers tests should execute on any os what is the current behavior bundlers tests package json commands work only for linux for example build rm rf dist yarn build vite yarn build vue cli looks like the best way to resolve this is by writing new node js script that will handle it import os and exec
1
594,651
18,050,409,576
IssuesEvent
2021-09-19 16:44:53
rstemmer/musicdb
https://api.github.com/repos/rstemmer/musicdb
opened
fuzzywuzzy library is now called thefuzz
Low Priority Back-End
[fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy) is now called [thefuzz](https://github.com/seatgeek/thefuzz). This needs to be investigated. - [ ] Is this library still maintained or is it just being buried? - [ ] Updating the requirements - [ ] Update `import` statements
1.0
fuzzywuzzy library is now called thefuzz - [fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy) is now called [thefuzz](https://github.com/seatgeek/thefuzz). This needs to be investigated. - [ ] Is this library still maintained or is it just being buried? - [ ] Updating the requirements - [ ] Update `import` statements
priority
fuzzywuzzy library is now called thefuzz is now called this needs to be investigated is this library still maintained or is it just being buried updating the requirements update import statements
1
364,289
10,761,771,804
IssuesEvent
2019-10-31 21:34:37
apifytech/apify-js
https://api.github.com/repos/apifytech/apify-js
closed
Apify.call and Apify.callTask should support passing ad hoc webhooks
enhancement low priority
It is in the client already
1.0
Apify.call and Apify.callTask should support passing ad hoc webhooks - It is in the client already
priority
apify call and apify calltask should support passing ad hoc webhooks it is in the client already
1
100,758
16,490,391,912
IssuesEvent
2021-05-25 02:15:42
jinuem/Parse-SDK-JS
https://api.github.com/repos/jinuem/Parse-SDK-JS
opened
WS-2019-0493 (High) detected in handlebars-4.1.2.tgz
security vulnerability
## WS-2019-0493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /Parse-SDK-JS/package.json</p> <p>Path to vulnerable library: Parse-SDK-JS/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - jest-cli-0.5.10.tgz (Root Library) - istanbul-0.3.22.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system. <p>Publish Date: 2019-11-14 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p> <p>Release Date: 2019-11-14</p> <p>Fix Resolution: handlebars - 3.0.8,4.5.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0493 (High) detected in handlebars-4.1.2.tgz - ## WS-2019-0493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /Parse-SDK-JS/package.json</p> <p>Path to vulnerable library: Parse-SDK-JS/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - jest-cli-0.5.10.tgz (Root Library) - istanbul-0.3.22.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system. <p>Publish Date: 2019-11-14 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p> <p>Release Date: 2019-11-14</p> <p>Fix Resolution: handlebars - 3.0.8,4.5.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file parse sdk js package json path to vulnerable library parse sdk js node modules handlebars package json dependency hierarchy jest cli tgz root library istanbul tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the package s lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
0
409,036
11,955,756,593
IssuesEvent
2020-04-04 06:39:40
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
closed
Synthesis failed for Google.Cloud.RecaptchaEnterprise.V1
autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate Google.Cloud.RecaptchaEnterprise.V1. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1' Cloning into '/tmpfs/tmp/tmpsxx4dmx_/googleapis'... Note: checking out '9d4a3ad084027c414a5670ff358a65917c81cdeb'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> HEAD is now at 9d4a3ad08 chore: set Ruby namespace in proto options Note: checking out 'd6cb4997910eda04c0c66c0f2fd043eeaa0f660d'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> HEAD is now at d6cb4997 chore: enable gapic v2 and proto annotation for documentai API. Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-60' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--'] 2020-04-03 22:49:39,650 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py. 
Skipping microgenerator fetch/build: already built, and running on Kokoro Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking Generating Google.Cloud.RecaptchaEnterprise.V1 Building new version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking Changes in Google.Cloud.RecaptchaEnterprise.V1: Diff level: Identical 2020-04-03 22:49:51,597 synthtool > Wrote metadata to synth.metadata. Changed files: M apis/Google.Cloud.RecaptchaEnterprise.V1/Google.Cloud.RecaptchaEnterprise.V1/Recaptchaenterprise.cs M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata [autosynth-Google.Cloud.RecaptchaEnterprise.V1-60 9776cd766] ignored 2 files changed, 6 insertions(+), 4 deletions(-) HEAD is now at 9776cd766 ignored Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1' Previous HEAD position was d6cb4997 chore: enable gapic v2 and proto annotation for documentai API. HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes Note: checking out '98024617efce32982bd763ad14f00c9bc0819bea'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> HEAD is now at 98024617e Touch all the synth.metadata files again Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-0' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--'] 2020-04-03 22:49:52,134 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py. 
Skipping microgenerator fetch/build: already built, and running on Kokoro Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking Generating Google.Cloud.RecaptchaEnterprise.V1 git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking 2020-04-03 22:49:56,154 synthtool > Wrote metadata to synth.metadata. Changed files: M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata HEAD is now at 98024617e Touch all the synth.metadata files again Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1' Note: checking out '9d4a3ad084027c414a5670ff358a65917c81cdeb'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> HEAD is now at 9d4a3ad08 chore: set Ruby namespace in proto options Previous HEAD position was 7be2811a fix: Update gapic-generator version to pickup discogapic fixes HEAD is now at 17cfae00 Add a new AuthorizationType for Data Source Definition. Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-30' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--'] 2020-04-03 22:49:56,733 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py. 
Synthesis failed for Google.Cloud.RecaptchaEnterprise.V1 - Hello! Autosynth couldn't regenerate Google.Cloud.RecaptchaEnterprise.V1. :broken_heart:

Here's the output from running `synth.py`:

```
Cloning into 'working_repo'...
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
Cloning into '/tmpfs/tmp/tmpsxx4dmx_/googleapis'...
Note: checking out '9d4a3ad084027c414a5670ff358a65917c81cdeb'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 9d4a3ad08 chore: set Ruby namespace in proto options
Note: checking out 'd6cb4997910eda04c0c66c0f2fd043eeaa0f660d'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at d6cb4997 chore: enable gapic v2 and proto annotation for documentai API.
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-60'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:49:39,650 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
Building new version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Changes in Google.Cloud.RecaptchaEnterprise.V1:
Diff level: Identical
2020-04-03 22:49:51,597 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/Google.Cloud.RecaptchaEnterprise.V1/Recaptchaenterprise.cs
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
[autosynth-Google.Cloud.RecaptchaEnterprise.V1-60 9776cd766] ignored
 2 files changed, 6 insertions(+), 4 deletions(-)
HEAD is now at 9776cd766 ignored
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
Previous HEAD position was d6cb4997 chore: enable gapic v2 and proto annotation for documentai API.
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Note: checking out '98024617efce32982bd763ad14f00c9bc0819bea'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 98024617e Touch all the synth.metadata files again
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-0'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:49:52,134 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:49:56,154 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at 98024617e Touch all the synth.metadata files again
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
Note: checking out '9d4a3ad084027c414a5670ff358a65917c81cdeb'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 9d4a3ad08 chore: set Ruby namespace in proto options
Previous HEAD position was 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
HEAD is now at 17cfae00 Add a new AuthorizationType for Data Source Definition.
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-30'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:49:56,733 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:50:00,771 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at 9d4a3ad08 chore: set Ruby namespace in proto options
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
Previous HEAD position was 17cfae00 Add a new AuthorizationType for Data Source Definition.
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Note: checking out '5a41fb5f1c7f5329cc981b77cae1f4762d705002'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 5a41fb5f1 Dialogflow weekly v2 library update
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-15'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:50:01,280 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:50:05,329 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at 5a41fb5f1 Dialogflow weekly v2 library update
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Note: checking out 'edebc2b7a22574b76412a9c1cf6832719b9c85e8'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at edebc2b7a fix Dataproc: add missing `REQUIRED` annotation.
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-7'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:50:05,856 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:50:09,963 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at edebc2b7a fix Dataproc: add missing `REQUIRED` annotation.
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Note: checking out '38fb38aeb841fb08e12ff366c2159fb6669b45d8'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 38fb38aeb Touch synth.metadata to get SecretManager APIs building
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-3'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:50:10,582 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:50:15,126 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at 38fb38aeb Touch synth.metadata to get SecretManager APIs building
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Note: checking out '0c88ce0da0e66910735643832b237c9f873b05e6'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 0c88ce0da Add a new AuthorizationType for Data Source Definition.
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1-1'
Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-04-03 22:50:15,674 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1/synth.py.
Skipping microgenerator fetch/build: already built, and running on Kokoro
Building existing version of Google.Cloud.RecaptchaEnterprise.V1 for compatibility checking
Generating Google.Cloud.RecaptchaEnterprise.V1
git detects no change in Google.Cloud.RecaptchaEnterprise.V1; skipping compatibility checking
2020-04-03 22:50:19,780 synthtool > Wrote metadata to synth.metadata.
Changed files:
M apis/Google.Cloud.RecaptchaEnterprise.V1/synth.metadata
HEAD is now at 0c88ce0da Add a new AuthorizationType for Data Source Definition.
Switched to branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1'
On branch autosynth-Google.Cloud.RecaptchaEnterprise.V1
nothing to commit, working tree clean
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 484, in <module>
    main()
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 370, in main
    return _inner_main(temp_dir)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 474, in _inner_main
    commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 284, in synthesize_loop
    synthesize_range(toolbox, synthesizer)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 306, in synthesize_range
    toolbox.patch_merge_version(young)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 146, in patch_merge_version
    comment or self.versions[index].version.get_comment()
  File "/tmpfs/src/git/autosynth/autosynth/git.py", line 95, in commit_all_changes
    subprocess.check_call(["git", "commit", "-m", message])
  File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'commit', '-m', 'Add a new AuthorizationType for Data Source Definition.\n\nhttps://github.com/googleapis/google-cloud-dotnet/commit/0c88ce0da0e66910735643832b237c9f873b05e6\ncommit 0c88ce0da0e66910735643832b237c9f873b05e6\nAuthor: yoshi-automation <yoshi-automation@google.com>\nDate: Wed Apr 1 02:27:58 2020 -0700\n\n Add a new AuthorizationType for Data Source Definition.\n \n https://github.com/googleapis/googleapis/commit/17cfae00f2bb51cb1683f017da7e295a1b0f01a8\n commit 17cfae00f2bb51cb1683f017da7e295a1b0f01a8\n Author: Google APIs <noreply@google.com>\n Date: Tue Mar 31 10:21:11 2020 -0700\n \n Add a new AuthorizationType for Data Source Definition.\n \n PiperOrigin-RevId: 303992863']' returned non-zero exit status 1.
```

Google internal developers can see the full log [here](https://sponge/89ee1c7b-40f0-4f7a-a7cd-f68bdd6f31c1).
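The traceback above comes down to autosynth's `commit_all_changes` running `git commit` on a clean tree: `git commit` exits with status 1 when there is nothing to commit, and `subprocess.check_call` turns any nonzero exit into a `CalledProcessError`. A minimal sketch of a guard for this failure mode follows; the helper name `commit_if_changes` and the injectable `run` parameter are my own illustration, not autosynth's actual API.

```python
import subprocess

def commit_if_changes(message, run=subprocess.run):
    """Commit all changes, but skip the commit when the tree is clean.

    `git commit` exits nonzero on a clean tree, which is exactly what made
    check_call raise in the log above.  `run` is injectable so the control
    flow can be tested without a real git repository.
    """
    # `git status --porcelain` prints nothing when there is nothing to commit.
    status = run(["git", "status", "--porcelain"],
                 capture_output=True, text=True, check=True)
    if not status.stdout.strip():
        return False  # nothing to commit; avoid the nonzero exit entirely
    run(["git", "add", "-A"], check=True)
    run(["git", "commit", "-m", message], check=True)
    return True
```

With a clean tree the function returns `False` without ever invoking `git commit`, so the `CalledProcessError` path seen in the log is never reached.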
id: 15646064209
type: IssuesEvent
created_at: 2021-03-23 00:05:15
repo: cockroachdb/cockroach
repo_url: https://api.github.com/repos/cockroachdb/cockroach
action: closed
title: roachtest: tpce/c=5000/nodes=3 failed
labels: C-test-failure O-roachtest O-robot branch-master
[(roachtest).tpce/c=5000/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=buildLog) on [master@ec011620c7cf299fdbb898db692b36454defc4a2](https://github.com/cockroachdb/cockroach/commits/ec011620c7cf299fdbb898db692b36454defc4a2):

```
| cd798458a46f: Pull complete
| Digest: sha256:1e299df6b79d4630bdb394ed98211f9303292b35aff2555ba5b997f2f889fdea
| Status: Downloaded newer image for cockroachdb/tpc-e:latest
| Error: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: None, code: SqlState("XXUUU"), message: "relation \"charge\" is offline: importing", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("descriptor.go"), line: Some(525), routine: Some("FilterDescriptorState") }) }
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
|   | ```
|   | sudo docker run cockroachdb/tpc-e:latest --customers=5000 --racks=3 --init --hosts=10.128.0.203
|   | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
| Initializing schema...
| Importing dataset...
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError

cluster.go:2688,tpce.go:96,tpce.go:113,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
  -- stack trace:
  | main.(*monitor).WaitE
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2676
  | main.(*monitor).Wait
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2684
  | main.registerTPCE.func1
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpce.go:96
  | main.registerTPCE.func2
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpce.go:113
  | main.(*testRunner).runTest.func2
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
  -- stack trace:
  | main.(*monitor).wait.func2
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2732
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
  -- stack trace:
  | main.init
  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2646
  | runtime.doInit
  | 	/usr/local/go/src/runtime/proc.go:5652
  | runtime.main
  | 	/usr/local/go/src/runtime/proc.go:191
  | runtime.goexit
  | 	/usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```

<details><summary>More</summary><p>

Artifacts: [/tpce/c=5000/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=artifacts#/tpce/c=5000/nodes=3)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpce%2Fc%3D5000%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)

<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: tpce/c=5000/nodes=3 failed - [(roachtest).tpce/c=5000/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=buildLog) on [master@ec011620c7cf299fdbb898db692b36454defc4a2](https://github.com/cockroachdb/cockroach/commits/ec011620c7cf299fdbb898db692b36454defc4a2): ``` | cd798458a46f: Pull complete | Digest: sha256:1e299df6b79d4630bdb394ed98211f9303292b35aff2555ba5b997f2f889fdea | Status: Downloaded newer image for cockroachdb/tpc-e:latest | Error: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: None, code: SqlState("XXUUU"), message: "relation \"charge\" is offline: importing", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("descriptor.go"), line: Some(525), routine: Some("FilterDescriptorState") }) } | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 4. Command with error: | | ``` | | sudo docker run cockroachdb/tpc-e:latest --customers=5000 --racks=3 --init --hosts=10.128.0.203 | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: | Initializing schema... | Importing dataset... 
Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2688,tpce.go:96,tpce.go:113,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2676 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2684 | main.registerTPCE.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpce.go:96 | main.registerTPCE.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpce.go:113 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2732 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2646 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/tpce/c=5000/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=artifacts#/tpce/c=5000/nodes=3) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpce%2Fc%3D5000%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project) 
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_priority
roachtest tpce c nodes failed on pull complete digest status downloaded newer image for cockroachdb tpc e latest error error kind db cause some dberror severity error parsed severity none code sqlstate xxuuu message relation charge is offline importing detail none hint none position none where none schema none table none column none datatype none constraint none file some descriptor go line some routine some filterdescriptorstate error command problem exit status command problem wraps node command with error sudo docker run cockroachdb tpc e latest customers racks init hosts wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout initializing schema importing dataset wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go tpce go tpce go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registertpce home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpce go main registertpce home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpce go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil 
withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts powered by
0
11,275
14,074,388,014
IssuesEvent
2020-11-04 07:12:38
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Create Points Layer From Table algorithm does not work in processing modeler
Bug Feedback Modeller Processing
<!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> The algorithm `Create Points Layer From Table` works if used directly. But when using it in the processing modeler, geometries are created with `NULL` coordinates. **How to Reproduce** 100% 1. Extract and open the [project](https://github.com/qgis/QGIS/files/5473133/project.zip). 2. In the processing toolbox, Open `Project models` > `Test` > `Test` 3. Using the layer `data` as input, run the algorithm. You will see no geometries in the output layers although I'm not confident if I used the algorithm correctly in the modeller 🤔. **QGIS and OS versions** QGIS version | 3.17.0-Master | QGIS code branch | master 9935bbe05e6e8709bc3fc120c23c721e862726f5 -- | -- | -- | -- Compiled against Qt | 5.15.1 | Running against Qt | 5.15.1 Compiled against GDAL/OGR | 3.1.3 | Running against GDAL/OGR | 3.1.3 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.33.0 | Running against SQLite | 3.33.0 PostgreSQL Client Version | 12.4 | SpatiaLite Version | 5.0.0-beta0 QWT Version | 6.1.5 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 
6.3.2, May 1st, 2020 OS Version | Fedora 33 (Workstation Edition) Active python plugins | string_reader; quick_map_services; MetaSearch; db_manager; processing
1.0
Create Points Layer From Table algorithm does not work in processing modeler - <!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> The algorithm `Create Points Layer From Table` works if used directly. But when using it in the processing modeler, geometries are created with `NULL` coordinates. **How to Reproduce** 100% 1. Extract and open the [project](https://github.com/qgis/QGIS/files/5473133/project.zip). 2. In the processing toolbox, Open `Project models` > `Test` > `Test` 3. Using the layer `data` as input, run the algorithm. You will see no geometries in the output layers although I'm not confident if I used the algorithm correctly in the modeller 🤔. **QGIS and OS versions** QGIS version | 3.17.0-Master | QGIS code branch | master 9935bbe05e6e8709bc3fc120c23c721e862726f5 -- | -- | -- | -- Compiled against Qt | 5.15.1 | Running against Qt | 5.15.1 Compiled against GDAL/OGR | 3.1.3 | Running against GDAL/OGR | 3.1.3 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.33.0 | Running against SQLite | 3.33.0 PostgreSQL Client Version | 12.4 | SpatiaLite Version | 5.0.0-beta0 QWT Version | 6.1.5 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 
6.3.2, May 1st, 2020 OS Version | Fedora 33 (Workstation Edition) Active python plugins | string_reader; quick_map_services; MetaSearch; db_manager; processing
non_priority
create points layer from table algorithm does not work in processing modeler bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug the algorithm create points layer from table works if used directly but when using it in the processing modeler geometries are created with null coordinates how to reproduce extract and open the in the processing toolbox open project models test test using the layer data as input run the algorithm you will see no geometries in the output layers although i m not confident if i used the algorithm correctly in the modeller 🤔 qgis and os versions qgis version master qgis code branch master compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version fedora workstation edition active python plugins string reader quick map services metasearch db manager processing
0
631,128
20,145,346,445
IssuesEvent
2022-02-09 06:39:24
DeFiCh/whale
https://api.github.com/repos/DeFiCh/whale
closed
Basic DEX PoolSwap & CompositeSwap Indexing
priority/urgent-now triage/accepted kind/feature area/module-database area/module-indexer
<!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: Simplified indexing of PoolSwap and CompositeSwap to track volume and activity. For Volume and APR tracking only hence Add/Remove Pool DfTx is not required. /triage accepted /area module-indexer module-database /priority important-soon
1.0
Basic DEX PoolSwap & CompositeSwap Indexing - <!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: Simplified indexing of PoolSwap and CompositeSwap to track volume and activity. For Volume and APR tracking only hence Add/Remove Pool DfTx is not required. /triage accepted /area module-indexer module-database /priority important-soon
priority
basic dex poolswap compositeswap indexing what would you like to be added simplified indexing of poolswap and compositeswap to track volume and activity for volume and apr tracking only hence add remove pool dftx is not required triage accepted area module indexer module database priority important soon
1
220,250
17,172,479,495
IssuesEvent
2021-07-15 07:14:01
submariner-io/submariner-operator
https://api.github.com/repos/submariner-io/submariner-operator
opened
`subctl benchmark throughput` appears to be stuck
0.10.0-testday enhancement
When running `subctl benchmark throughput` it appears as if it's stuck when it's running the test. ```bash [root@483221d9c954 submariner]# subctl benchmark throughput --kubecontexts cluster1,cluster2 Performing throughput tests from Gateway pod on cluster "cluster1" to Gateway pod on cluster "cluster2" ... TIME PASSES ... its/sec 161 53.0 KBytes [ 17] 7.00-8.00 sec 12.2 MBytes 102 Mbits/sec 173 60.8 KBytes [ 19] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 209 45.3 KBytes [ 21] 7.00-8.00 sec 11.9 MBytes 100 Mbits/sec 105 58.2 KBytes ``` From a UX perspective, it would be better to print out "spinner messages" indicating something is being done (and it would be in line with other subctl commands). For example, describe pods being created, iperf running, etc. Optionally, we can provide a `--quiet` flag to suppress this output (for better consumption in scripts and such). <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: **Why is this needed**:
1.0
`subctl benchmark throughput` appears to be stuck - When running `subctl benchmark throughput` it appears as if it's stuck when it's running the test. ```bash [root@483221d9c954 submariner]# subctl benchmark throughput --kubecontexts cluster1,cluster2 Performing throughput tests from Gateway pod on cluster "cluster1" to Gateway pod on cluster "cluster2" ... TIME PASSES ... its/sec 161 53.0 KBytes [ 17] 7.00-8.00 sec 12.2 MBytes 102 Mbits/sec 173 60.8 KBytes [ 19] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 209 45.3 KBytes [ 21] 7.00-8.00 sec 11.9 MBytes 100 Mbits/sec 105 58.2 KBytes ``` From a UX perspective, it would be better to print out "spinner messages" indicating something is being done (and it would be in line with other subctl commands). For example, describe pods being created, iperf running, etc. Optionally, we can provide a `--quiet` flag to suppress this output (for better consumption in scripts and such). <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: **Why is this needed**:
non_priority
subctl benchmark throughput appears to be stuck when running subctl benchmark throughput it appears as if it s stuck when it s running the test bash subctl benchmark throughput kubecontexts performing throughput tests from gateway pod on cluster to gateway pod on cluster time passes its sec kbytes sec mbytes mbits sec kbytes sec mbytes mbits sec kbytes sec mbytes mbits sec kbytes from a ux perspective it would be better to print out spinner messages indicating something is being done and it would be in line with other subctl commands for example describe pods being created iperf running etc optionally we can provide a quiet flag to suppress this output for better consumption in scripts and such what would you like to be added why is this needed
0
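The `--quiet` flag requested in the row above can be sketched generically. This is a minimal illustration in Python of the pattern being asked for (subctl itself is written in Go; the `progress` helper and the messages here are hypothetical, not subctl code): progress lines go to stderr and are suppressed under `--quiet`, while the final result stays on stdout for scripts.

```python
import argparse
import sys

def run(argv=None):
    parser = argparse.ArgumentParser(prog="benchmark")
    parser.add_argument("--quiet", action="store_true",
                        help="suppress spinner/progress messages for script-friendly output")
    args = parser.parse_args(argv)

    def progress(msg):
        # Progress goes to stderr so stdout stays clean for results,
        # and is dropped entirely when --quiet is set.
        if not args.quiet:
            print(msg, file=sys.stderr)

    progress("Creating benchmark pods...")
    progress("Running iperf between gateway pods...")
    return "throughput: 102 Mbits/sec"

# With --quiet, only the result line reaches stdout.
print(run(["--quiet"]))
```

Separating progress (stderr) from results (stdout) means the flag never changes what downstream pipelines consume.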
137,262
5,301,186,726
IssuesEvent
2017-02-10 08:43:54
Cadasta/cadasta-platform
https://api.github.com/repos/Cadasta/cadasta-platform
closed
API: unauthorized users can edit organization
bug priority: high security
### Steps to reproduce the error __Anonymous User__ - Create an organization (name: 'Westeros') - Run `http PUT https://platform-staging.cadasta.org/api/v1/organizations/westeros/ name='Westeros (The Capital)'` OR __Non-org member__ - Create an organization (name: 'Westeros') - Run `http PUT https://platform-staging.cadasta.org/api/v1/organizations/westeros/ name='Westeros (The Capital)' AUTHORIZATION:'Token {token of non-org member}'` ### Actual behavior Returned JSON: ``` { ... "name": "Westeros (The Capital)", ... } ``` ### Expected behavior Permission denied.
1.0
API: unauthorized users can edit organization - ### Steps to reproduce the error __Anonymous User__ - Create an organization (name: 'Westeros') - Run `http PUT https://platform-staging.cadasta.org/api/v1/organizations/westeros/ name='Westeros (The Capital)'` OR __Non-org member__ - Create an organization (name: 'Westeros') - Run `http PUT https://platform-staging.cadasta.org/api/v1/organizations/westeros/ name='Westeros (The Capital)' AUTHORIZATION:'Token {token of non-org member}'` ### Actual behavior Returned JSON: ``` { ... "name": "Westeros (The Capital)", ... } ``` ### Expected behavior Permission denied.
priority
api unauthorized users can edit organization steps to reproduce the error anonymous user create an organization name westeros run http put name westeros the capital or non org member create an organization name westeros run http put name westeros the capital authorization token token of non org member actual behavior returned json name westeros the capital expected behavior permission denied
1
123,576
17,772,266,231
IssuesEvent
2021-08-30 14:54:48
kapseliboi/farmOS-client
https://api.github.com/repos/kapseliboi/farmOS-client
opened
CVE-2021-28092 (High) detected in is-svg-3.0.0.tgz
security vulnerability
## CVE-2021-28092 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary> <p>Check if a string or buffer is SVG</p> <p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p> <p>Path to dependency file: farmOS-client/package.json</p> <p>Path to vulnerable library: farmOS-client/node_modules/is-svg/package.json</p> <p> Dependency Hierarchy: - optimize-css-assets-webpack-plugin-5.0.3.tgz (Root Library) - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-svgo-4.0.2.tgz - :x: **is-svg-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/farmOS-client/commit/84d85973c6144586d046c1a0e273a352c6c58968">84d85973c6144586d046c1a0e273a352c6c58968</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The is-svg package 2.1.0 through 4.2.1 for Node.js uses a regular expression that is vulnerable to Regular Expression Denial of Service (ReDoS). If an attacker provides a malicious string, is-svg will get stuck processing the input for a very long time. 
<p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28092>CVE-2021-28092</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: v4.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-28092 (High) detected in is-svg-3.0.0.tgz - ## CVE-2021-28092 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary> <p>Check if a string or buffer is SVG</p> <p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p> <p>Path to dependency file: farmOS-client/package.json</p> <p>Path to vulnerable library: farmOS-client/node_modules/is-svg/package.json</p> <p> Dependency Hierarchy: - optimize-css-assets-webpack-plugin-5.0.3.tgz (Root Library) - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-svgo-4.0.2.tgz - :x: **is-svg-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/farmOS-client/commit/84d85973c6144586d046c1a0e273a352c6c58968">84d85973c6144586d046c1a0e273a352c6c58968</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The is-svg package 2.1.0 through 4.2.1 for Node.js uses a regular expression that is vulnerable to Regular Expression Denial of Service (ReDoS). If an attacker provides a malicious string, is-svg will get stuck processing the input for a very long time. 
<p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28092>CVE-2021-28092</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: v4.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve high detected in is svg tgz cve high severity vulnerability vulnerable library is svg tgz check if a string or buffer is svg library home page a href path to dependency file farmos client package json path to vulnerable library farmos client node modules is svg package json dependency hierarchy optimize css assets webpack plugin tgz root library cssnano tgz cssnano preset default tgz postcss svgo tgz x is svg tgz vulnerable library found in head commit a href found in base branch develop vulnerability details the is svg package through for node js uses a regular expression that is vulnerable to regular expression denial of service redos if an attacker provides a malicious string is svg will get stuck processing the input for a very long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
265,892
8,360,136,505
IssuesEvent
2018-10-03 10:30:24
lendingblock/aio-openapi
https://api.github.com/repos/lendingblock/aio-openapi
closed
Add search by column
high priority
We want to be able to search (postgres `ilike` style). Let's start with one column with intent to extend to multicolumn search
1.0
Add search by column - We want to be able to search (postgres `ilike` style). Let's start with one column with intent to extend to multicolumn search
priority
add search by column we want to be able to search postgres ilike style let s start with one column with intent to extend to multicolumn search
1
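The single-column search requested in the row above can be sketched as follows. This is a minimal illustration using SQLite's case-insensitive `LIKE` as a stand-in for Postgres `ILIKE` (the `items` table and `name` column are hypothetical, not from the repository); extending to multi-column search means OR-ing one clause per column.

```python
import sqlite3

# In-memory table standing in for a Postgres table; SQLite's LIKE is
# case-insensitive for ASCII by default, which mimics Postgres ILIKE here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("Alpha",), ("alphabet",), ("Beta",)])

ALLOWED_COLUMNS = {"name"}  # identifiers cannot be bound as placeholders

def search_by_column(conn, column, term):
    # Whitelist the column name: only the search term is parameterized.
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unknown column: {column}")
    query = f"SELECT name FROM items WHERE {column} LIKE ?"
    return [row[0] for row in conn.execute(query, (f"%{term}%",))]

print(search_by_column(conn, "name", "alph"))  # → ['Alpha', 'alphabet']
```

The whitelist matters because SQL placeholders bind values, never identifiers, so a column name taken from user input must be validated before interpolation.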
82,329
7,837,774,550
IssuesEvent
2018-06-18 07:51:17
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
closed
Do not update model from private tabs
initiative/bat-ads initiative/bat-ads/ads-test release/blocking
Currently, private tabs show up in the model log. They shouldn't. cf. https://github.com/brave-intl/internal/issues/57
1.0
Do not update model from private tabs - Currently, private tabs show up in the model log. They shouldn't. cf. https://github.com/brave-intl/internal/issues/57
non_priority
do not update model from private tabs currently private tabs show up in the model log they shouldn t cf
0
705,274
24,229,071,189
IssuesEvent
2022-09-26 16:36:01
googleapis/release-please
https://api.github.com/repos/googleapis/release-please
closed
Missing proxy configuration for the graphql client when use proxy configuration
priority: p2 type: bug
Thanks for stopping by to let us know something could be better! **PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response. 1) Is this a client library issue or a product issue? With the #1639 , we are able to run the release-please-action in a self-hosted runner behind a proxy. However, we miss passing the same proxy configuration for the graphql client. As shown in the below image, the API call to rest HTTP endpoints works as expected but failed to call the graphql github apiendpoint. <img width="988" alt="Screen Shot 2022-09-26 at 11 03 24 am (1)" src="https://user-images.githubusercontent.com/7248260/192175559-a92ae491-063d-4824-b9ad-fee521c206df.png"> #### Environment details - OS: - Node.js version: - npm version: - `release-please` version: #### Steps to reproduce Modify the release-please-action to test proxy. #### The fix By adding the same proxy configuration to the graphql client will resolve this issue ``` graphql: graphql.defaults({ baseUrl: graphqlUrl, request: { agent: this.createDefaultAgent(graphqlUrl, options.proxy), }, headers: { 'user-agent': `release-please/${releasePleaseVersion}`, Authorization: `token ${options.token}`, 'content-type': 'application/vnd.github.v3+json', }, }), ``` With the above changes, I can see jobs were succeeded in our self-hosted runner. <img width="839" alt="Screen Shot 2022-09-26 at 11 16 19 am" src="https://user-images.githubusercontent.com/7248260/192175894-f81afd84-545a-4e2d-9f90-f75aad861fe5.png"> PR is ready: #1655
1.0
Missing proxy configuration for the graphql client when use proxy configuration - Thanks for stopping by to let us know something could be better! **PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response. 1) Is this a client library issue or a product issue? With the #1639 , we are able to run the release-please-action in a self-hosted runner behind a proxy. However, we miss passing the same proxy configuration for the graphql client. As shown in the below image, the API call to rest HTTP endpoints works as expected but failed to call the graphql github apiendpoint. <img width="988" alt="Screen Shot 2022-09-26 at 11 03 24 am (1)" src="https://user-images.githubusercontent.com/7248260/192175559-a92ae491-063d-4824-b9ad-fee521c206df.png"> #### Environment details - OS: - Node.js version: - npm version: - `release-please` version: #### Steps to reproduce Modify the release-please-action to test proxy. #### The fix By adding the same proxy configuration to the graphql client will resolve this issue ``` graphql: graphql.defaults({ baseUrl: graphqlUrl, request: { agent: this.createDefaultAgent(graphqlUrl, options.proxy), }, headers: { 'user-agent': `release-please/${releasePleaseVersion}`, Authorization: `token ${options.token}`, 'content-type': 'application/vnd.github.v3+json', }, }), ``` With the above changes, I can see jobs were succeeded in our self-hosted runner. <img width="839" alt="Screen Shot 2022-09-26 at 11 16 19 am" src="https://user-images.githubusercontent.com/7248260/192175894-f81afd84-545a-4e2d-9f90-f75aad861fe5.png"> PR is ready: #1655
priority
missing proxy configuration for the graphql client when use proxy configuration thanks for stopping by to let us know something could be better please read if you have a support contract with google please create an issue in the instead of filing on github this will ensure a timely response is this a client library issue or a product issue with the we are able to run the release please action in a self hosted runner behind a proxy however we miss passing the same proxy configuration for the graphql client as shown in the below image the api call to rest http endpoints works as expected but failed to call the graphql github apiendpoint img width alt screen shot at am src environment details os node js version npm version release please version steps to reproduce modify the release please action to test proxy the fix by adding the same proxy configuration to the graphql client will resolve this issue graphql graphql defaults baseurl graphqlurl request agent this createdefaultagent graphqlurl options proxy headers user agent release please releasepleaseversion authorization token options token content type application vnd github json with the above changes i can see jobs were succeeded in our self hosted runner img width alt screen shot at am src pr is ready
1
218,011
24,351,727,114
IssuesEvent
2022-10-03 01:13:46
hygieia/hygieia-whitesource-collector
https://api.github.com/repos/hygieia/hygieia-whitesource-collector
opened
CVE-2022-42003 (Medium) detected in jackson-databind-2.8.11.3.jar
security vulnerability
## CVE-2022-42003 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - core-3.15.42.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. 
<p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-42003 (Medium) detected in jackson-databind-2.8.11.3.jar - ## CVE-2022-42003 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - core-3.15.42.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. 
<p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy core jar root library spring boot starter web release jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
124,362
12,229,417,604
IssuesEvent
2020-05-04 00:07:59
drakessn/Kaggle_Learn
https://api.github.com/repos/drakessn/Kaggle_Learn
closed
Geospatial Analysis
Learning documentation
Create interactive maps, and discover patterns in geospatial data - [x] Your First Map - [x] Coordinate Reference Systems - [ ] Interactive Maps - [ ] Manipulating Geospatial Data - [ ] Proximity Analysis
1.0
Geospatial Analysis - Create interactive maps, and discover patterns in geospatial data - [x] Your First Map - [x] Coordinate Reference Systems - [ ] Interactive Maps - [ ] Manipulating Geospatial Data - [ ] Proximity Analysis
non_priority
geospatial analysis create interactive maps and discover patterns in geospatial data your first map coordinate reference systems interactive maps manipulating geospatial data proximity analysis
0
600,579
18,345,450,156
IssuesEvent
2021-10-08 05:19:52
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Kubelet is reporting OutOfCpu on previously running workloads after restart
kind/bug priority/critical-urgent sig/scheduling sig/node triage/accepted
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: Restart: Always pods went into OutOfCpu error state after kubelet restart. #### What you expected to happen: Pods should remain running as node had sufficient capacity for them. Completed, RestartNever pod should not revert from Completed to OutOfCpu. #### How to reproduce it (as minimally and precisely as possible): Launch kubelet with: ```bash # kube-reserved setting allows my 16-core machine to have 8 cores allocatable LOG_LEVEL=6 KUBELET_FLAGS="--kube-reserved=cpu=8000m" ./hack/local-up-cluster.sh ``` Run the following script against the local cluster: <details> ```bash #!/bin/bash i=1 create_completed() { local i=1 local podname status while test $i -le 8 do podname=complete$i cluster/kubectl.sh create -f- <<EOF apiVersion: v1 kind: Pod metadata: name: $podname spec: containers: - name: busybox image: busybox command: ["echo", "ok"] resources: limits: cpu: "1" memory: "40Mi" restartPolicy: Never EOF while true do status=$(cluster/kubectl.sh get pods complete$i -o json | jq '.status.phase') echo $status test "$status" = '"Succeeded"' && break sleep 1 done ((i++)) done } create_running() { local i=1 local podname status while test $i -le 6 do podname=running$i cluster/kubectl.sh create -f- <<EOF apiVersion: v1 kind: Pod metadata: name: $podname spec: containers: - name: busybox image: busybox command: ["sleep", "inf"] resources: limits: cpu: "1" memory: "40Mi" restartPolicy: Always EOF ((i++)) done } create_completed create_running ``` </details> Observe pod status: ``` ehashman@fedora:~/src/k8s$ cluster/kubectl.sh get pod NAME READY STATUS RESTARTS AGE complete1 0/1 Completed 0 80s complete2 0/1 Completed 0 78s complete3 0/1 Completed 0 75s complete4 0/1 Completed 0 
71s complete5 0/1 Completed 0 68s complete6 0/1 Completed 0 66s complete7 0/1 Completed 0 62s complete8 0/1 Completed 0 59s running1 1/1 Running 0 56s running2 1/1 Running 0 56s running3 1/1 Running 0 56s running4 1/1 Running 0 55s running5 1/1 Running 0 55s running6 1/1 Running 0 55s ``` Restart the kubelet: ```bash kill `pidof kubelet` START_MODE=kubeletonly KUBELET_FLAGS="--kube-reserved=cpu=8000m" hack/local-up-cluster.sh ``` Observe pods: ``` ehashman@fedora:~/src/k8s$ cluster/kubectl.sh get pod NAME READY STATUS RESTARTS AGE complete1 0/1 Completed 0 2m30s complete2 0/1 Completed 0 2m28s complete3 0/1 Completed 0 2m25s complete4 0/1 Completed 0 2m21s complete5 0/1 Completed 0 2m18s complete6 0/1 Completed 0 2m16s complete7 0/1 Completed 0 2m12s complete8 0/1 OutOfcpu 0 2m9s running1 0/1 Error 1 2m6s running2 0/1 Error 1 (70s ago) 2m6s running3 0/1 Error 1 (70s ago) 2m6s running4 0/1 Error 1 (70s ago) 2m5s running5 1/1 OutOfcpu 1 (70s ago) 2m5s running6 0/1 Error 1 (70s ago) 2m5s ``` [kubelet.log](https://github.com/kubernetes/kubernetes/files/7296430/kubelet.log) #### Anything else we need to know?: xref https://bugzilla.redhat.com/show_bug.cgi?id=2011513 /sig node #### Environment: - Kubernetes version (use `kubectl version`): v1.23.0-alpha.3.183+a861de6d162f61 - a861de6d162f6197971dffbf09ae101545a9c2e7 - Cloud provider or hardware configuration: Local machine - OS (e.g: `cat /etc/os-release`) & Kernel (e.g. 
`uname -a`): ``` ehashman@fedora:~/src/k8s$ cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" ehashman@fedora:~/src/k8s$ uname -a Linux fedora 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux ``` - Install tools: local-up-cluster.sh - Network plugin and version (if this is a network-related bug): N/A - Others:
1.0
Kubelet is reporting OutOfCpu on previously running workloads after restart - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> #### What happened: Restart: Always pods went into OutOfCpu error state after kubelet restart. #### What you expected to happen: Pods should remain running as node had sufficient capacity for them. Completed, RestartNever pod should not revert from Completed to OutOfCpu. #### How to reproduce it (as minimally and precisely as possible): Launch kubelet with: ```bash # kube-reserved setting allows my 16-core machine to have 8 cores allocatable LOG_LEVEL=6 KUBELET_FLAGS="--kube-reserved=cpu=8000m" ./hack/local-up-cluster.sh ``` Run the following script against the local cluster: <details> ```bash #!/bin/bash i=1 create_completed() { local i=1 local podname status while test $i -le 8 do podname=complete$i cluster/kubectl.sh create -f- <<EOF apiVersion: v1 kind: Pod metadata: name: $podname spec: containers: - name: busybox image: busybox command: ["echo", "ok"] resources: limits: cpu: "1" memory: "40Mi" restartPolicy: Never EOF while true do status=$(cluster/kubectl.sh get pods complete$i -o json | jq '.status.phase') echo $status test "$status" = '"Succeeded"' && break sleep 1 done ((i++)) done } create_running() { local i=1 local podname status while test $i -le 6 do podname=running$i cluster/kubectl.sh create -f- <<EOF apiVersion: v1 kind: Pod metadata: name: $podname spec: containers: - name: busybox image: busybox command: ["sleep", "inf"] resources: limits: cpu: "1" memory: "40Mi" restartPolicy: Always EOF ((i++)) done } create_completed create_running ``` </details> Observe pod status: ``` ehashman@fedora:~/src/k8s$ cluster/kubectl.sh get pod NAME READY STATUS RESTARTS AGE complete1 0/1 Completed 0 80s complete2 
0/1 Completed 0 78s complete3 0/1 Completed 0 75s complete4 0/1 Completed 0 71s complete5 0/1 Completed 0 68s complete6 0/1 Completed 0 66s complete7 0/1 Completed 0 62s complete8 0/1 Completed 0 59s running1 1/1 Running 0 56s running2 1/1 Running 0 56s running3 1/1 Running 0 56s running4 1/1 Running 0 55s running5 1/1 Running 0 55s running6 1/1 Running 0 55s ``` Restart the kubelet: ```bash kill `pidof kubelet` START_MODE=kubeletonly KUBELET_FLAGS="--kube-reserved=cpu=8000m" hack/local-up-cluster.sh ``` Observe pods: ``` ehashman@fedora:~/src/k8s$ cluster/kubectl.sh get pod NAME READY STATUS RESTARTS AGE complete1 0/1 Completed 0 2m30s complete2 0/1 Completed 0 2m28s complete3 0/1 Completed 0 2m25s complete4 0/1 Completed 0 2m21s complete5 0/1 Completed 0 2m18s complete6 0/1 Completed 0 2m16s complete7 0/1 Completed 0 2m12s complete8 0/1 OutOfcpu 0 2m9s running1 0/1 Error 1 2m6s running2 0/1 Error 1 (70s ago) 2m6s running3 0/1 Error 1 (70s ago) 2m6s running4 0/1 Error 1 (70s ago) 2m5s running5 1/1 OutOfcpu 1 (70s ago) 2m5s running6 0/1 Error 1 (70s ago) 2m5s ``` [kubelet.log](https://github.com/kubernetes/kubernetes/files/7296430/kubelet.log) #### Anything else we need to know?: xref https://bugzilla.redhat.com/show_bug.cgi?id=2011513 /sig node #### Environment: - Kubernetes version (use `kubectl version`): v1.23.0-alpha.3.183+a861de6d162f61 - a861de6d162f6197971dffbf09ae101545a9c2e7 - Cloud provider or hardware configuration: Local machine - OS (e.g: `cat /etc/os-release`) & Kernel (e.g. 
`uname -a`): ``` ehashman@fedora:~/src/k8s$ cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" ehashman@fedora:~/src/k8s$ uname -a Linux fedora 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux ``` - Install tools: local-up-cluster.sh - Network plugin and version (if this is a network-related bug): N/A - Others:
priority
kubelet is reporting outofcpu on previously running workloads after restart please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately via what happened restart always pods went into outofcpu error state after kubelet restart what you expected to happen pods should remain running as node had sufficient capacity for them completed restartnever pod should not revert from completed to outofcpu how to reproduce it as minimally and precisely as possible launch kubelet with bash kube reserved setting allows my core machine to have cores allocatable log level kubelet flags kube reserved cpu hack local up cluster sh run the following script against the local cluster bash bin bash i create completed local i local podname status while test i le do podname complete i cluster kubectl sh create f eof apiversion kind pod metadata name podname spec containers name busybox image busybox command resources limits cpu memory restartpolicy never eof while true do status cluster kubectl sh get pods complete i o json jq status phase echo status test status succeeded break sleep done i done create running local i local podname status while test i le do podname running i cluster kubectl sh create f eof apiversion kind pod metadata name podname spec containers name busybox image busybox command resources limits cpu memory restartpolicy always eof i done create completed create running observe pod status ehashman fedora src cluster kubectl sh get pod name ready status restarts age completed completed completed completed completed completed completed completed running running running running running running restart the kubelet bash kill pidof kubelet start mode kubeletonly kubelet flags kube reserved cpu hack local up cluster sh observe pods ehashman fedora src cluster kubectl sh get pod name ready status restarts age 
completed completed completed completed completed completed completed outofcpu error error ago error ago error ago outofcpu ago error ago anything else we need to know xref sig node environment kubernetes version use kubectl version alpha cloud provider or hardware configuration local machine os e g cat etc os release kernel e g uname a ehashman fedora src cat etc os release pretty name debian gnu linux bullseye name debian gnu linux version id version bullseye version codename bullseye id debian home url support url bug report url ehashman fedora src uname a linux fedora smp debian gnu linux install tools local up cluster sh network plugin and version if this is a network related bug n a others
1
444,047
12,805,565,837
IssuesEvent
2020-07-03 07:47:14
acl-org/acl-2020-virtual-conference
https://api.github.com/repos/acl-org/acl-2020-virtual-conference
closed
Sponsors & Exhibitor page
priority:high
**Number of Volunteers**: 3 TODO (Updated after #41) - [x] Edit [sponsors.json](https://github.com/acl-org/acl-2020-virtual-conference-sitedata/blob/master/sponsors.json). The information below need to be provided by the sponsor. For demo purpose, just put some generic values. - [x] change the logos (needs to upload images to the folder `static/images/acl2020` - [x] add a `channel` field to record the RocketChat channel, currently set them to `example_sponsor` - [x] add a `description` field to record the sponsor descriptions - [x] add a `website` field to record the link to the external Sponsor's website - [x] add a `zoom_schedule` field (list) to record Zoom Room links and schedules for live QA sessions. - [x] add a `contact` field (list) to record contact person information. - [x] other, e.g., downloadable links - [x] Edit `main.py` [here](https://github.com/acl-org/acl-2020-virtual-conference/blob/ee76ec9dfa96812defbf60910aea8129f3db3f72/main.py#L36) to make sure the `sponsor` data is properly loaded. Note `site_data` stores the raw `sponsors.json`, whereas `by_uid["sponsors"]` record a uid-to-sponsor lookup dict. - [x] Edit [sponsors.html](https://github.com/acl-org/acl-2020-virtual-conference/blob/master/templates/sponsors.html) and `components.html` [here](https://github.com/acl-org/acl-2020-virtual-conference/blob/ee76ec9dfa96812defbf60910aea8129f3db3f72/templates/components.html#L228) to make sure the "Sponsors" page look nice. See [ICASSP 2020](https://2020.ieeeicassp-virtual.org/patron-exhibitor-virtual-space) for reference. - [x] Edit [sponsor.html](https://github.com/acl-org/acl-2020-virtual-conference/blob/master/templates/sponsor.html) to make sure the individual sponsor pages look nice. - [x] Should show the sponsor level. - [x] Should show the sponsor logo. - [x] It seems that the page is auto-focused on the RocketChat channel. Should disable this behavior. * It seems to work fine now??? Need more testing
1.0
Sponsors & Exhibitor page - **Number of Volunteers**: 3 TODO (Updated after #41) - [x] Edit [sponsors.json](https://github.com/acl-org/acl-2020-virtual-conference-sitedata/blob/master/sponsors.json). The information below need to be provided by the sponsor. For demo purpose, just put some generic values. - [x] change the logos (needs to upload images to the folder `static/images/acl2020` - [x] add a `channel` field to record the RocketChat channel, currently set them to `example_sponsor` - [x] add a `description` field to record the sponsor descriptions - [x] add a `website` field to record the link to the external Sponsor's website - [x] add a `zoom_schedule` field (list) to record Zoom Room links and schedules for live QA sessions. - [x] add a `contact` field (list) to record contact person information. - [x] other, e.g., downloadable links - [x] Edit `main.py` [here](https://github.com/acl-org/acl-2020-virtual-conference/blob/ee76ec9dfa96812defbf60910aea8129f3db3f72/main.py#L36) to make sure the `sponsor` data is properly loaded. Note `site_data` stores the raw `sponsors.json`, whereas `by_uid["sponsors"]` record a uid-to-sponsor lookup dict. - [x] Edit [sponsors.html](https://github.com/acl-org/acl-2020-virtual-conference/blob/master/templates/sponsors.html) and `components.html` [here](https://github.com/acl-org/acl-2020-virtual-conference/blob/ee76ec9dfa96812defbf60910aea8129f3db3f72/templates/components.html#L228) to make sure the "Sponsors" page look nice. See [ICASSP 2020](https://2020.ieeeicassp-virtual.org/patron-exhibitor-virtual-space) for reference. - [x] Edit [sponsor.html](https://github.com/acl-org/acl-2020-virtual-conference/blob/master/templates/sponsor.html) to make sure the individual sponsor pages look nice. - [x] Should show the sponsor level. - [x] Should show the sponsor logo. - [x] It seems that the page is auto-focused on the RocketChat channel. Should disable this behavior. * It seems to work fine now??? Need more testing
priority
sponsors exhibitor page number of volunteers todo updated after edit the information below need to be provided by the sponsor for demo purpose just put some generic values change the logos needs to upload images to the folder static images add a channel field to record the rocketchat channel currently set them to example sponsor add a description field to record the sponsor descriptions add a website field to record the link to the external sponsor s website add a zoom schedule field list to record zoom room links and schedules for live qa sessions add a contact field list to record contact person information other e g downloadable links edit main py to make sure the sponsor data is properly loaded note site data stores the raw sponsors json whereas by uid record a uid to sponsor lookup dict edit and components html to make sure the sponsors page look nice see for reference edit to make sure the individual sponsor pages look nice should show the sponsor level should show the sponsor logo it seems that the page is auto focused on the rocketchat channel should disable this behavior it seems to work fine now need more testing
1
798,621
28,291,539,316
IssuesEvent
2023-04-09 09:10:10
AY2223S2-CS2103T-T15-1/tp
https://api.github.com/repos/AY2223S2-CS2103T-T15-1/tp
closed
[PE-D][Tester F] Inappropriate error message by view feature while in view mode
type.Bug priority.Low
![image.png](https://raw.githubusercontent.com/wendy0107/ped/main/files/d696a573-096e-41ef-83da-db2ad65a16db.png) Command sequence: view item swords -> view item swords While the user guide doesn't contain information regarding the expected error message nor specify the need to use `back` `b` after entering the view mode, the error message is still misleading as it suggests to users that the classification or entity name is wrong Also, there is no such thing as `entity type` or `type` in the user guide <!--session: 1680242215083-9e0b5931-2395-4e41-ac28-afe10aab59f0--> <!--Version: Web v3.4.7--> ------------- Labels: `severity.Low` `type.FeatureFlaw` original: wendy0107/ped#6
1.0
[PE-D][Tester F] Inappropriate error message by view feature while in view mode - ![image.png](https://raw.githubusercontent.com/wendy0107/ped/main/files/d696a573-096e-41ef-83da-db2ad65a16db.png) Command sequence: view item swords -> view item swords While the user guide doesn't contain information regarding the expected error message nor specify the need to use `back` `b` after entering the view mode, the error message is still misleading as it suggests to users that the classification or entity name is wrong Also, there is no such thing as `entity type` or `type` in the user guide <!--session: 1680242215083-9e0b5931-2395-4e41-ac28-afe10aab59f0--> <!--Version: Web v3.4.7--> ------------- Labels: `severity.Low` `type.FeatureFlaw` original: wendy0107/ped#6
priority
inappropriate error message by view feature while in view mode command sequence view item swords view item swords while the user guide doesn t contain information regarding the expected error message nor specify the need to use back b after entering the view mode the error message is still misleading as it suggests to users that the classification or entity name is wrong also there is no such thing as entity type or type in the user guide labels severity low type featureflaw original ped
1
15,967
3,996,047,901
IssuesEvent
2016-05-10 17:27:51
snowplow/snowplow
https://api.github.com/repos/snowplow/snowplow
closed
Documentation capitalisation change for Python tracker
documentation
I think the code snippet on the Python tracker page should have `subject` capitalised. https://github.com/snowplow/snowplow/wiki/Python-Tracker ``` from snowplow_tracker import Subject s = Subject() ```
1.0
Documentation capitalisation change for Python tracker - I think the code snippet on the Python tracker page should have `subject` capitalised. https://github.com/snowplow/snowplow/wiki/Python-Tracker ``` from snowplow_tracker import Subject s = Subject() ```
non_priority
documentation capitalisation change for python tracker i think the code snippet on the python tracker page should have subject capitalised from snowplow tracker import subject s subject
0
36,060
4,712,187,099
IssuesEvent
2016-10-14 15:58:21
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
opened
l10n of ledger advanced settings modal window
design feature/ledger l10n
**Describe the issue you encountered:** buttons on ledger advanced settings modal window are broken. ![clipboard01](https://cloud.githubusercontent.com/assets/3362943/19393880/a37ad584-9271-11e6-9fdb-d4f88ba2bb99.png) **Expected behavior:** Buttons should not word-wrap. - Platform (Win7, 8, 10? macOS? Linux distro?): Windows 10 - Brave Version: 0.12.5 RC1 - Any related issues: #2096
1.0
l10n of ledger advanced settings modal window - **Describe the issue you encountered:** buttons on ledger advanced settings modal window are broken. ![clipboard01](https://cloud.githubusercontent.com/assets/3362943/19393880/a37ad584-9271-11e6-9fdb-d4f88ba2bb99.png) **Expected behavior:** Buttons should not word-wrap. - Platform (Win7, 8, 10? macOS? Linux distro?): Windows 10 - Brave Version: 0.12.5 RC1 - Any related issues: #2096
non_priority
of ledger advanced settings modal window describe the issue you encountered buttons on ledger advanced settings modal window are broken expected behavior buttons should not word wrap platform macos linux distro windows brave version any related issues
0
31,272
11,903,303,730
IssuesEvent
2020-03-30 15:09:13
istio/istio
https://api.github.com/repos/istio/istio
opened
TLS handshake errors connecting to Istiod with low TTL
area/security
istiod logs: ``` 2020-03-30T15:06:05.093291Z info grpc: Server.Serve failed to complete security handshake from "10.28.1.200:53890": tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid ``` istio-agent logs: ``` [Envoy (Epoch 0)] [2020-03-30 15:03:37.052][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 13, [Envoy (Epoch 0)] [2020-03-30 15:03:37.352][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.407][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.464][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.596][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. 
reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:40.281][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:41.541][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure 2020-03-30T15:03:41.979425Z info sds resource:default connection is terminated: rpc error: code = Canceled desc = context canceled [Envoy (Epoch 0)] [2020-03-30 15:03:41.979][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamSecrets gRPC config stream closed: 2, resource:default Close connection to proxy "sidecar~10.28.8.20~svc07-0-6-6ddb59546-9rp9n.service-graph07~service-graph07.svc.cluster.local-5" 2020-03-30T15:03:42.035457Z info sds resource:default new connection 2020-03-30T15:03:42.631652Z info cache GenerateSecret default 2020-03-30T15:03:42.631777Z info sds resource:default pushed key/cert pair to proxy ``` Basically the XDS connection is closed, fails to reconnect due to expired cert. Then we detect the cert is old, create a new one, and things go back to normal This is with a 10m TTL. I will investigate a slightly higher TTL Query `sum(irate(envoy_cluster_upstream_cx_connect_fail{cluster_name="xds-grpc"}[1m]))` can identify this easily
True
TLS handshake errors connecting to Istiod with low TTL - istiod logs: ``` 2020-03-30T15:06:05.093291Z info grpc: Server.Serve failed to complete security handshake from "10.28.1.200:53890": tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid ``` istio-agent logs: ``` [Envoy (Epoch 0)] [2020-03-30 15:03:37.052][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 13, [Envoy (Epoch 0)] [2020-03-30 15:03:37.352][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.407][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.464][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:37.596][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. 
reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:40.281][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure [Envoy (Epoch 0)] [2020-03-30 15:03:41.541][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure 2020-03-30T15:03:41.979425Z info sds resource:default connection is terminated: rpc error: code = Canceled desc = context canceled [Envoy (Epoch 0)] [2020-03-30 15:03:41.979][26][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamSecrets gRPC config stream closed: 2, resource:default Close connection to proxy "sidecar~10.28.8.20~svc07-0-6-6ddb59546-9rp9n.service-graph07~service-graph07.svc.cluster.local-5" 2020-03-30T15:03:42.035457Z info sds resource:default new connection 2020-03-30T15:03:42.631652Z info cache GenerateSecret default 2020-03-30T15:03:42.631777Z info sds resource:default pushed key/cert pair to proxy ``` Basically the XDS connection is closed, fails to reconnect due to expired cert. Then we detect the cert is old, create a new one, and things go back to normal This is with a 10m TTL. I will investigate a slightly higher TTL Query `sum(irate(envoy_cluster_upstream_cx_connect_fail{cluster_name="xds-grpc"}[1m]))` can identify this easily
non_priority
tls handshake errors connecting to istiod with low ttl istiod logs info grpc server serve failed to complete security handshake from tls failed to verify client s certificate certificate has expired or is not yet valid istio agent logs streamaggregatedresources grpc config stream closed streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure streamaggregatedresources grpc config stream closed upstream connect error or disconnect reset before headers reset reason connection failure info sds resource default connection is terminated rpc error code canceled desc context canceled streamsecrets grpc config stream closed resource default close connection to proxy sidecar service service svc cluster local info sds resource default new connection info cache generatesecret default info sds resource default pushed key cert pair to proxy basically the xds connection is closed fails to reconnect due to expired cert then we detect the cert is old create a new one and things go back to normal this is with a ttl i will investigate a slightly higher ttl query sum irate envoy cluster upstream cx connect fail cluster name xds grpc can identify this easily
0
254,190
8,071,026,813
IssuesEvent
2018-08-06 11:49:28
FlowzPlatform/Sprint-User-Story-Board
https://api.github.com/repos/FlowzPlatform/Sprint-User-Story-Board
closed
Uploader-[feature]-Image Upload
Epic High Priority uploader - Images upload
### SUMMARY: customer can upload his product images ---- ### Prerequisite: - R&D of HTML Folder upload Ref : https://www.aurigma.com/docs/us8/uploading-folders-in-other-platforms-iuf.htm#setClientSide -subscription module have add-ons ### User Story Description: As a supplier, i want upload product image file. product image will appear in product listing page ### Acceptance Criteria: 1. file size should be limited by purchase subscription 2. Image will be stored on cloudinary ### See [Wiki](https://github.com/FlowzPlatform/Sprint-User-Story-Board/wiki/%5BUploader%5D-:-Product-Images-Upload)
1.0
Uploader-[feature]-Image Upload - ### SUMMARY: customer can upload his product images ---- ### Prerequisite: - R&D of HTML Folder upload Ref : https://www.aurigma.com/docs/us8/uploading-folders-in-other-platforms-iuf.htm#setClientSide -subscription module have add-ons ### User Story Description: As a supplier, i want upload product image file. product image will appear in product listing page ### Acceptance Criteria: 1. file size should be limited by purchase subscription 2. Image will be stored on cloudinary ### See [Wiki](https://github.com/FlowzPlatform/Sprint-User-Story-Board/wiki/%5BUploader%5D-:-Product-Images-Upload)
priority
uploader image upload summary customer can upload his product images prerequisite r d of html folder upload ref subscription module have add ons user story description as a supplier i want upload product image file product image will appear in product listing page acceptance criteria file size should be limited by purchase subscription image will be stored on cloudinary see
1
498,283
14,405,043,106
IssuesEvent
2020-12-03 18:07:12
timescale/promscale
https://api.github.com/repos/timescale/promscale
opened
Avoid possible loss of resolution when switching leaders
kind/enhancement priority/sev3
The current leader-election for HA configuration is such that once the leader stops receiving the samples from prometheus instance for a period of the timeout, it gives up the advisory-lock and the non-leader then after receiving its samples from its corresponding Prometheus attempts and takes the lock. This means the samples between the fall of leader to the rise of the non-leader are dropped. We should prevent this by buffering possibly the samples in the non-leader instance for the time that is (timeout + prometheus scrape interval (assuming 30 seconds as ideal in clusters)) the timeout given for a shift of leader. This will assure that no samples are dropped.
1.0
Avoid possible loss of resolution when switching leaders - The current leader-election for HA configuration is such that once the leader stops receiving the samples from prometheus instance for a period of the timeout, it gives up the advisory-lock and the non-leader then after receiving its samples from its corresponding Prometheus attempts and takes the lock. This means the samples between the fall of leader to the rise of the non-leader are dropped. We should prevent this by buffering possibly the samples in the non-leader instance for the time that is (timeout + prometheus scrape interval (assuming 30 seconds as ideal in clusters)) the timeout given for a shift of leader. This will assure that no samples are dropped.
priority
avoid possible loss of resolution when switching leaders the current leader election for ha configuration is such that once the leader stops receiving the samples from prometheus instance for a period of the timeout it gives up the advisory lock and the non leader then after receiving its samples from its corresponding prometheus attempts and takes the lock this means the samples between the fall of leader to the rise of the non leader are dropped we should prevent this by buffering possibly the samples in the non leader instance for the time that is timeout prometheus scrape interval assuming seconds as ideal in clusters the timeout given for a shift of leader this will assure that no samples are dropped
1
191,535
6,830,003,316
IssuesEvent
2017-11-09 03:43:42
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
services.gst.gov.in - site is not usable
browser-firefox-mobile priority-normal
<!-- @browser: Firefox Mobile 58.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://services.gst.gov.in/services/login **Browser / Version**: Firefox Mobile 58.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: the website works fine on all other modern browsers **Steps to Reproduce**: This website works on all browsers except Firefox [![Screenshot Description](https://webcompat.com/uploads/2017/11/b7eac4fd-bc1e-424b-8363-72f476a3668a-thumb.jpg)](https://webcompat.com/uploads/2017/11/b7eac4fd-bc1e-424b-8363-72f476a3668a.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
services.gst.gov.in - site is not usable - <!-- @browser: Firefox Mobile 58.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://services.gst.gov.in/services/login **Browser / Version**: Firefox Mobile 58.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: the website works fine on all other modern browsers **Steps to Reproduce**: This website works on all browsers except Firefox [![Screenshot Description](https://webcompat.com/uploads/2017/11/b7eac4fd-bc1e-424b-8363-72f476a3668a-thumb.jpg)](https://webcompat.com/uploads/2017/11/b7eac4fd-bc1e-424b-8363-72f476a3668a.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
services gst gov in site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description the website works fine on all other modern browsers steps to reproduce this website works on all browsers except firefox from with ❤️
1
593,855
18,018,518,320
IssuesEvent
2021-09-16 16:22:09
brave/qa-resources
https://api.github.com/repos/brave/qa-resources
closed
C94 1.31.x (Nightly) manual pass/regression check
OS/Desktop QA Completed - Win x64 priority/P1 OS/Windows
### <img src="https://www.rebron.org/wordpress/wp-content/uploads/2019/06/release-1.png"> `Chromium 94 Nightly Manual Passes/Regression Check` #### Release Date/Target: * Release Date: **`ASAP`** #### Summary: QA needs to run through **`C94`** on Nightly before it's uplifted into the other channels (targeting `1.30.x`) #### Manual Passes/Documentation: * https://github.com/brave/brave-browser/issues/18084
1.0
C94 1.31.x (Nightly) manual pass/regression check - ### <img src="https://www.rebron.org/wordpress/wp-content/uploads/2019/06/release-1.png"> `Chromium 94 Nightly Manual Passes/Regression Check` #### Release Date/Target: * Release Date: **`ASAP`** #### Summary: QA needs to run through **`C94`** on Nightly before it's uplifted into the other channels (targeting `1.30.x`) #### Manual Passes/Documentation: * https://github.com/brave/brave-browser/issues/18084
priority
x nightly manual pass regression check img src chromium nightly manual passes regression check release date target release date asap summary qa needs to run through on nightly before it s uplifted into the other channels targeting x manual passes documentation
1
107,716
4,314,369,732
IssuesEvent
2016-07-22 14:16:56
citusdata/citus
https://api.github.com/repos/citusdata/citus
opened
PurgeConnection may segfault when re-raising error
1-2 days priority:high
We recently made this change https://github.com/citusdata/citus/commit/16fc92bf6b95eed0ee7e7097572b4337a27f0573 However, there are various callers to ReportRemoteError that use it on connections that are not from the connection cache (e.g. COPY, master_modify_multiple_shards, DDL), and this will segfault if the connection cache hasn't been initialized, meaning the those callers may segfault whenever there is a remote error.
1.0
PurgeConnection may segfault when re-raising error - We recently made this change https://github.com/citusdata/citus/commit/16fc92bf6b95eed0ee7e7097572b4337a27f0573 However, there are various callers to ReportRemoteError that use it on connections that are not from the connection cache (e.g. COPY, master_modify_multiple_shards, DDL), and this will segfault if the connection cache hasn't been initialized, meaning the those callers may segfault whenever there is a remote error.
priority
purgeconnection may segfault when re raising error we recently made this change however there are various callers to reportremoteerror that use it on connections that are not from the connection cache e g copy master modify multiple shards ddl and this will segfault if the connection cache hasn t been initialized meaning the those callers may segfault whenever there is a remote error
1
100,708
12,551,696,798
IssuesEvent
2020-06-06 15:38:01
mcneel/rhino.inside-revit
https://api.github.com/repos/mcneel/rhino.inside-revit
closed
Rhino.Inside Revit - Element - Delete Element
design enhancement ux
Would be possible to add true/ false input Toggle to` Delete Element` or make the second node. Everyone always forgets to detach this node and this cause issue. With `Button` component this will be the perfect solution. ![image](https://user-images.githubusercontent.com/26113670/81300467-a0193e80-906f-11ea-80b5-939871bf1780.png)
1.0
Rhino.Inside Revit - Element - Delete Element - Would be possible to add true/ false input Toggle to` Delete Element` or make the second node. Everyone always forgets to detach this node and this cause issue. With `Button` component this will be the perfect solution. ![image](https://user-images.githubusercontent.com/26113670/81300467-a0193e80-906f-11ea-80b5-939871bf1780.png)
non_priority
rhino inside revit element delete element would be possible to add true false input toggle to delete element or make the second node everyone always forgets to detach this node and this cause issue with button component this will be the perfect solution
0
161,462
25,343,617,100
IssuesEvent
2022-11-19 01:26:22
pulumi/pulumi-aws
https://api.github.com/repos/pulumi/pulumi-aws
closed
Changing aws.rds.Instance identifier results in replacement
kind/enhancement resolution/by-design
If I change the `identifier` of a `aws.rds.Instance`, Pulumi thinks it needs to replace the instance. This property is actually able to be changed in place.
1.0
Changing aws.rds.Instance identifier results in replacement - If I change the `identifier` of a `aws.rds.Instance`, Pulumi thinks it needs to replace the instance. This property is actually able to be changed in place.
non_priority
changing aws rds instance identifier results in replacement if i change the identifier of a aws rds instance pulumi thinks it needs to replace the instance this property is actually able to be changed in place
0
225,676
17,873,790,498
IssuesEvent
2021-09-06 21:24:08
sqlalchemy/sqlalchemy
https://api.github.com/repos/sqlalchemy/sqlalchemy
closed
Tests seem not to catch failures in `test/ext/asincio`
bug tests asyncio
### Describe the bug The current master https://github.com/sqlalchemy/sqlalchemy/commit/184e2da5992c55266b37bab5ce3a07e9dfb8caa1 fails a test in `test/ext/asyncio/test_session_py3k.py`: `FAILED test/ext/asyncio/test_session_py3k.py::OverrideSyncSession::test_init_class - AttributeError: 'AsyncSession' object attribute 'sync_session_class' is read-only` Looking at the error the current implementation is incompatible with `__slots__` in the `AsyncSession` class. > ### To Reproduce ```python pytest --db aiosqlite .\test\ext\asyncio\ ``` ### Error FAILED test/ext/asyncio/test_session_py3k.py::OverrideSyncSession::test_init_class - AttributeError: 'AsyncSession' object attribute 'sync_session_class' is read-only ### Versions - SQLAlchemy: 184e2da5992c55266b37bab5ce3a07e9dfb8caa1 ### Additional context _No response_
1.0
Tests seem not to catch failures in `test/ext/asincio` - ### Describe the bug The current master https://github.com/sqlalchemy/sqlalchemy/commit/184e2da5992c55266b37bab5ce3a07e9dfb8caa1 fails a test in `test/ext/asyncio/test_session_py3k.py`: `FAILED test/ext/asyncio/test_session_py3k.py::OverrideSyncSession::test_init_class - AttributeError: 'AsyncSession' object attribute 'sync_session_class' is read-only` Looking at the error the current implementation is incompatible with `__slots__` in the `AsyncSession` class. > ### To Reproduce ```python pytest --db aiosqlite .\test\ext\asyncio\ ``` ### Error FAILED test/ext/asyncio/test_session_py3k.py::OverrideSyncSession::test_init_class - AttributeError: 'AsyncSession' object attribute 'sync_session_class' is read-only ### Versions - SQLAlchemy: 184e2da5992c55266b37bab5ce3a07e9dfb8caa1 ### Additional context _No response_
non_priority
tests seem not to catch failures in test ext asincio describe the bug the current master fails a test in test ext asyncio test session py failed test ext asyncio test session py overridesyncsession test init class attributeerror asyncsession object attribute sync session class is read only looking at the error the current implementation is incompatible with slots in the asyncsession class to reproduce python pytest db aiosqlite test ext asyncio error failed test ext asyncio test session py overridesyncsession test init class attributeerror asyncsession object attribute sync session class is read only versions sqlalchemy additional context no response
0
755,717
26,437,808,584
IssuesEvent
2023-01-15 16:01:24
gamefreedomgit/Maelstrom
https://api.github.com/repos/gamefreedomgit/Maelstrom
closed
Just Broke Stormwind Royal Guards patrol and they cleared almost all mobs behind goldshire till murlock lil islands [PATROL BUG]
NPC Movement Priority: Low Status: Confirmed
[//]: # (REMBEMBER! Add links to things related to the bug using for example:) [//]: # (http://wowhead.com/) [//]: # (cata-twinhead.twinstar.cz) **Description:** So I Just Broke Stormwind Royal Guards patroling Stormwind > Goldshire Area and they cleared almost all mobs behind goldshire till murlock islands teleporting midair xd **How to reproduce:** I`ve used gryphon master from Goldshire to Stormwind and it broke. I was flying without Gryphon to Stormwind but it did not release me in SW city and went back to Goldshire throu all the textures then back to stormwind when it let me off. So i wanted to replicate this bug and I`ve reloaded game and just used multiple times those Gryphon masters and after few attemps i saw guards patrolling SW>Goldshire area bug out. Well they started to fly midair towards all the mobs and going towards Goldshire teleporting throu buildings and then killing all the mobs there till after murlock islands and later on as I was observing them they cleared almost whole forest from Goldshire till Tower of Azora. Seems like guards route is bugged and their agro too big. Please look first comment for some more screenshots **How it should work:** Guards should patrol from SW city > Goldshire > Eastvale Logging Camp area killing mobs that are close to road. **Database links:** https://cata-twinhead.twinstar.cz/?npc=42218 https://cata-twinhead.twinstar.cz/?npc=42983 https://cata-twinhead.twinstar.cz/?npc=352 ![WoWScrnShot_011323_023845](https://user-images.githubusercontent.com/122581454/212224363-d5ae605d-afa2-43cf-aa6e-1859186dc9f0.jpg) ![WoWScrnShot_011323_025320](https://user-images.githubusercontent.com/122581454/212224456-e21c8f6b-58d7-45e0-998b-aef6ada0f7a5.jpg) ![xdd](https://user-images.githubusercontent.com/122581454/212224545-3ad537ec-02aa-485a-8332-33b4e582a7bd.jpg) ![WoWScrnShot_011323_025756](https://user-images.githubusercontent.com/122581454/212224598-9557fcfc-d988-400e-94c9-9bf6f650ad40.jpg)
1.0
Just Broke Stormwind Royal Guards patrol and they cleared almost all mobs behind goldshire till murlock lil islands [PATROL BUG] - [//]: # (REMBEMBER! Add links to things related to the bug using for example:) [//]: # (http://wowhead.com/) [//]: # (cata-twinhead.twinstar.cz) **Description:** So I Just Broke Stormwind Royal Guards patroling Stormwind > Goldshire Area and they cleared almost all mobs behind goldshire till murlock islands teleporting midair xd **How to reproduce:** I`ve used gryphon master from Goldshire to Stormwind and it broke. I was flying without Gryphon to Stormwind but it did not release me in SW city and went back to Goldshire throu all the textures then back to stormwind when it let me off. So i wanted to replicate this bug and I`ve reloaded game and just used multiple times those Gryphon masters and after few attemps i saw guards patrolling SW>Goldshire area bug out. Well they started to fly midair towards all the mobs and going towards Goldshire teleporting throu buildings and then killing all the mobs there till after murlock islands and later on as I was observing them they cleared almost whole forest from Goldshire till Tower of Azora. Seems like guards route is bugged and their agro too big. Please look first comment for some more screenshots **How it should work:** Guards should patrol from SW city > Goldshire > Eastvale Logging Camp area killing mobs that are close to road. 
**Database links:** https://cata-twinhead.twinstar.cz/?npc=42218 https://cata-twinhead.twinstar.cz/?npc=42983 https://cata-twinhead.twinstar.cz/?npc=352 ![WoWScrnShot_011323_023845](https://user-images.githubusercontent.com/122581454/212224363-d5ae605d-afa2-43cf-aa6e-1859186dc9f0.jpg) ![WoWScrnShot_011323_025320](https://user-images.githubusercontent.com/122581454/212224456-e21c8f6b-58d7-45e0-998b-aef6ada0f7a5.jpg) ![xdd](https://user-images.githubusercontent.com/122581454/212224545-3ad537ec-02aa-485a-8332-33b4e582a7bd.jpg) ![WoWScrnShot_011323_025756](https://user-images.githubusercontent.com/122581454/212224598-9557fcfc-d988-400e-94c9-9bf6f650ad40.jpg)
priority
just broke stormwind royal guards patrol and they cleared almost all mobs behind goldshire till murlock lil islands rembember add links to things related to the bug using for example cata twinhead twinstar cz description so i just broke stormwind royal guards patroling stormwind goldshire area and they cleared almost all mobs behind goldshire till murlock islands teleporting midair xd how to reproduce i ve used gryphon master from goldshire to stormwind and it broke i was flying without gryphon to stormwind but it did not release me in sw city and went back to goldshire throu all the textures then back to stormwind when it let me off so i wanted to replicate this bug and i ve reloaded game and just used multiple times those gryphon masters and after few attemps i saw guards patrolling sw goldshire area bug out well they started to fly midair towards all the mobs and going towards goldshire teleporting throu buildings and then killing all the mobs there till after murlock islands and later on as i was observing them they cleared almost whole forest from goldshire till tower of azora seems like guards route is bugged and their agro too big please look first comment for some more screenshots how it should work guards should patrol from sw city goldshire eastvale logging camp area killing mobs that are close to road database links
1
44,066
2,899,099,610
IssuesEvent
2015-06-17 09:12:43
greenlion/PHP-SQL-Parser
https://api.github.com/repos/greenlion/PHP-SQL-Parser
closed
1. please turn off debug print_r() 2. probably, some bug in parser triggers it
bug imported Priority-Medium
_From [rel...@gmail.com](https://code.google.com/u/113326933444776031366/) on June 19, 2011 15:31:59_ WHAT STEPS WILL REPRODUCE THE PROBLEM? 1. $sql = "SELECT SQL_CALC_FOUND_ROWS SmTable.*, MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') AS keyword_score FROM SmTable WHERE SmTable.status = 'A' AND (SmTable.country_id = 1 AND SmTable.state_id = 10) AND MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') ORDER BY SmTable.level DESC, keyword_score DESC LIMIT 0,10" 2. $parser = new PHPSQLParser($sql); WHAT IS THE EXPECTED OUTPUT? WHAT DO YOU SEE INSTEAD? No output expected. Instead, it hits line 1242 and prints smth: if(!is_array($processed)) { print_r($processed); // 1242 $processed = false; } What version of the product are you using? On what operating system? http://php-sql-parser.googlecode.com/svn/trunk/php-sql-parser.php uname -a Linux ********* `#1` SMP Tue Sep 1 10:25:30 EDT 2009 x86_64 GNU/Linux _Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=12_
1.0
1. please turn off debug print_r() 2. probably, some bug in parser triggers it - _From [rel...@gmail.com](https://code.google.com/u/113326933444776031366/) on June 19, 2011 15:31:59_ WHAT STEPS WILL REPRODUCE THE PROBLEM? 1. $sql = "SELECT SQL_CALC_FOUND_ROWS SmTable.*, MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') AS keyword_score FROM SmTable WHERE SmTable.status = 'A' AND (SmTable.country_id = 1 AND SmTable.state_id = 10) AND MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') ORDER BY SmTable.level DESC, keyword_score DESC LIMIT 0,10" 2. $parser = new PHPSQLParser($sql); WHAT IS THE EXPECTED OUTPUT? WHAT DO YOU SEE INSTEAD? No output expected. Instead, it hits line 1242 and prints smth: if(!is_array($processed)) { print_r($processed); // 1242 $processed = false; } What version of the product are you using? On what operating system? http://php-sql-parser.googlecode.com/svn/trunk/php-sql-parser.php uname -a Linux ********* `#1` SMP Tue Sep 1 10:25:30 EDT 2009 x86_64 GNU/Linux _Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=12_
priority
please turn off debug print r probably some bug in parser triggers it from on june what steps will reproduce the problem sql select sql calc found rows smtable match smtable fulltextsearch keyword against google googles as keyword score from smtable where smtable status a and smtable country id and smtable state id and match smtable fulltextsearch keyword against google googles order by smtable level desc keyword score desc limit parser new phpsqlparser sql what is the expected output what do you see instead no output expected instead it hits line and prints smth if is array processed print r processed processed false what version of the product are you using on what operating system uname a linux smp tue sep edt gnu linux original issue
1
674,601
23,058,837,469
IssuesEvent
2022-07-25 08:05:03
Raid-Training-Initiative/RTIBot
https://api.github.com/repos/Raid-Training-Initiative/RTIBot
closed
Slash Command: /managetrainingrequest
feature high priority slash commands
**Details:** * `/managetrainingrequest add <member> <wings> <comment>` adds a training request for the given `<member>`, with the `<wings>` being a comma-separated list of wings (use `8` for EoD strikes) and `<comment>` being a comment that shows up in the training request. * `/managetrainingrequest remove <user>` removes any training request of a given `<member>` **Current functionality:** * `TrainingRequestAddCommand` * `TrainingRequestRemoveCommand` **Notes:** * Ensure that there's validation on the `<member>` parameters (i.e. it accepts only Discord users). * Validation on the `<wings>` parameter to via regex would be nice. * No autocomplete needed. * Error messages should be ephemeral.
1.0
Slash Command: /managetrainingrequest - **Details:** * `/managetrainingrequest add <member> <wings> <comment>` adds a training request for the given `<member>`, with the `<wings>` being a comma-separated list of wings (use `8` for EoD strikes) and `<comment>` being a comment that shows up in the training request. * `/managetrainingrequest remove <user>` removes any training request of a given `<member>` **Current functionality:** * `TrainingRequestAddCommand` * `TrainingRequestRemoveCommand` **Notes:** * Ensure that there's validation on the `<member>` parameters (i.e. it accepts only Discord users). * Validation on the `<wings>` parameter to via regex would be nice. * No autocomplete needed. * Error messages should be ephemeral.
priority
slash command managetrainingrequest details managetrainingrequest add adds a training request for the given with the being a comma separated list of wings use for eod strikes and being a comment that shows up in the training request managetrainingrequest remove removes any training request of a given current functionality trainingrequestaddcommand trainingrequestremovecommand notes ensure that there s validation on the parameters i e it accepts only discord users validation on the parameter to via regex would be nice no autocomplete needed error messages should be ephemeral
1
116,452
9,852,871,337
IssuesEvent
2019-06-19 13:44:48
kcigeospatial/Fred_Co_Land-Management
https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management
reopened
PreProd Mapping-Final Inspection Milestone
Ready for PreProd Env. Retest
Permit# 190619 (Non-Residential Tank) -MCoffman Permit Only has Final Inspection left to be completed, but is still in the "Inspections" Milestone not "Final Inspection"; This issue has been found on multiple permit types ![image](https://user-images.githubusercontent.com/47611580/59690410-63bac680-91af-11e9-96c9-9fba58f2b975.png) ![image](https://user-images.githubusercontent.com/47611580/59690574-9ebcfa00-91af-11e9-8c7d-ab4ed7a486c5.png)
1.0
PreProd Mapping-Final Inspection Milestone - Permit# 190619 (Non-Residential Tank) -MCoffman Permit Only has Final Inspection left to be completed, but is still in the "Inspections" Milestone not "Final Inspection"; This issue has been found on multiple permit types ![image](https://user-images.githubusercontent.com/47611580/59690410-63bac680-91af-11e9-96c9-9fba58f2b975.png) ![image](https://user-images.githubusercontent.com/47611580/59690574-9ebcfa00-91af-11e9-8c7d-ab4ed7a486c5.png)
non_priority
preprod mapping final inspection milestone permit non residential tank mcoffman permit only has final inspection left to be completed but is still in the inspections milestone not final inspection this issue has been found on multiple permit types
0
226,015
24,928,986,150
IssuesEvent
2022-10-31 09:59:54
redpanda-data/redpanda
https://api.github.com/repos/redpanda-data/redpanda
closed
Failure in AccessControlListTestUpgrade.test_describe_acls
kind/bug area/redpanda ci-failure area/security
FAIL test: AccessControlListTestUpgrade.test_describe_acls.use_tls=True.use_sasl=False.enable_authz=False.authn_method=sasl.client_auth=True (1/46 runs) failure at 2022-10-21T08:48:49.667Z: <BadLogLines nodes=ip-172-31-11-31(1) example="ERROR 2022-10-21 07:07:09,233 [shard 0] rpc - server.cc:127 - kafka rpc protocol - Error[applying protocol] remote address: 172.31.38.196:41106 - std::__1::system_error (error GnuTLS:-110, The TLS connection was non-properly terminated.)"> in job [https://buildkite.com/redpanda/vtools/builds/3849#0183f881-bab6-4f7c-8777-18d74061ac61](https://buildkite.com/redpanda/vtools/builds/3849#0183f881-bab6-4f7c-8777-18d74061ac61)
True
Failure in AccessControlListTestUpgrade.test_describe_acls - FAIL test: AccessControlListTestUpgrade.test_describe_acls.use_tls=True.use_sasl=False.enable_authz=False.authn_method=sasl.client_auth=True (1/46 runs) failure at 2022-10-21T08:48:49.667Z: <BadLogLines nodes=ip-172-31-11-31(1) example="ERROR 2022-10-21 07:07:09,233 [shard 0] rpc - server.cc:127 - kafka rpc protocol - Error[applying protocol] remote address: 172.31.38.196:41106 - std::__1::system_error (error GnuTLS:-110, The TLS connection was non-properly terminated.)"> in job [https://buildkite.com/redpanda/vtools/builds/3849#0183f881-bab6-4f7c-8777-18d74061ac61](https://buildkite.com/redpanda/vtools/builds/3849#0183f881-bab6-4f7c-8777-18d74061ac61)
non_priority
failure in accesscontrollisttestupgrade test describe acls fail test accesscontrollisttestupgrade test describe acls use tls true use sasl false enable authz false authn method sasl client auth true runs failure at in job
0
543,535
15,883,452,096
IssuesEvent
2021-04-09 17:24:39
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
[bug] [tests] cant run tests locally without setting the ENV variables
high priority module: ci triage review triaged
After the PR https://github.com/pytorch/pytorch/pull/55522, **cant run tests locally without setting the ENV variables** ``` $ pytest test/test_ops.py ======================================================================= test session starts ======================================================================== platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 rootdir: /home/kshiteej/Pytorch/pytorch_opinfo, configfile: pytest.ini plugins: hypothesis-5.38.1 collected 0 items ========================================================================= warnings summary ========================================================================= ======================================================================= 2 warnings in 2.85s ======================================================================== ``` Works with ENV variable specified ``` PYTORCH_TESTING_DEVICE_ONLY_FOR="cpu" pytest test/test_ops.py ======================================================================= test session starts ======================================================================== platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 rootdir: /home/kshiteej/Pytorch/pytorch_opinfo, configfile: pytest.ini plugins: hypothesis-5.38.1 collected 4443 items ``` cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @seemethere @malfet @walterddr @pytorch/pytorch-dev-infra
1.0
[bug] [tests] cant run tests locally without setting the ENV variables - After the PR https://github.com/pytorch/pytorch/pull/55522, **cant run tests locally without setting the ENV variables** ``` $ pytest test/test_ops.py ======================================================================= test session starts ======================================================================== platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 rootdir: /home/kshiteej/Pytorch/pytorch_opinfo, configfile: pytest.ini plugins: hypothesis-5.38.1 collected 0 items ========================================================================= warnings summary ========================================================================= ======================================================================= 2 warnings in 2.85s ======================================================================== ``` Works with ENV variable specified ``` PYTORCH_TESTING_DEVICE_ONLY_FOR="cpu" pytest test/test_ops.py ======================================================================= test session starts ======================================================================== platform linux -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 rootdir: /home/kshiteej/Pytorch/pytorch_opinfo, configfile: pytest.ini plugins: hypothesis-5.38.1 collected 4443 items ``` cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @seemethere @malfet @walterddr @pytorch/pytorch-dev-infra
priority
cant run tests locally without setting the env variables after the pr cant run tests locally without setting the env variables pytest test test ops py test session starts platform linux python pytest py pluggy rootdir home kshiteej pytorch pytorch opinfo configfile pytest ini plugins hypothesis collected items warnings summary warnings in works with env variable specified pytorch testing device only for cpu pytest test test ops py test session starts platform linux python pytest py pluggy rootdir home kshiteej pytorch pytorch opinfo configfile pytest ini plugins hypothesis collected items cc ezyang gchanan bdhirsh jbschlosser seemethere malfet walterddr pytorch pytorch dev infra
1
768,191
26,957,690,570
IssuesEvent
2023-02-08 15:57:17
biodiversitydata-se/biocollect
https://api.github.com/repos/biodiversitydata-se/biocollect
closed
When a new route is created and "sent in", give the person a note that "you have to wait"
2-High priority
When a new point count route is created, we as admin repeatedly get several notes by mail that a route is created. Not just one note. This is because the person who created the route doesn't understand that the route is in the system and the (s)he needs to wait for an OK from us. They therefore submit the route several times. It is in the instructions, but "not all read instructions".... We would like to have a function that, when the person has "sent in" the new route, says something like: "Stort tack! Din nya punktrutt är registrerad. Den skall nu godkännas av projektledningen. Du kommer att få ett mail när den är godkänd. Först då kan du rapportera data för rutten". We only need this function for point count routes. Åke
1.0
When a new route is created and "sent in", give the person a note that "you have to wait" - When a new point count route is created, we as admin repeatedly get several notes by mail that a route is created. Not just one note. This is because the person who created the route doesn't understand that the route is in the system and the (s)he needs to wait for an OK from us. They therefore submit the route several times. It is in the instructions, but "not all read instructions".... We would like to have a function that, when the person has "sent in" the new route, says something like: "Stort tack! Din nya punktrutt är registrerad. Den skall nu godkännas av projektledningen. Du kommer att få ett mail när den är godkänd. Först då kan du rapportera data för rutten". We only need this function for point count routes. Åke
priority
when a new route is created and sent in give the person a note that you have to wait when a new point count route is created we as admin repeatedly get several notes by mail that a route is created not just one note this is because the person who created the route doesn t understand that the route is in the system and the s he needs to wait for an ok from us they therefore submit the route several times it is in the instructions but not all read instructions we would like to have a function that when the person has sent in the new route says something like stort tack din nya punktrutt är registrerad den skall nu godkännas av projektledningen du kommer att få ett mail när den är godkänd först då kan du rapportera data för rutten we only need this function for point count routes åke
1
160,577
6,100,322,915
IssuesEvent
2017-06-20 12:20:33
javaee/glassfish
https://api.github.com/repos/javaee/glassfish
opened
Create a new test suite in CI pipeline for admin GUI tests
Component: admin_gui Priority: Critical
The task is to create a script and make necessary changes in the devtests to make the new test suite run on hudson.
1.0
Create a new test suite in CI pipeline for admin GUI tests - The task is to create a script and make necessary changes in the devtests to make the new test suite run on hudson.
priority
create a new test suite in ci pipeline for admin gui tests the task is to create a script and make necessary changes in the devtests to make the new test suite run on hudson
1
24,540
6,551,638,719
IssuesEvent
2017-09-05 15:22:50
rmap-project/rmap
https://api.github.com/repos/rmap-project/rmap
closed
Update the backend code to implement new user management approach
Code improvement and features Production-readiness User Management
Follow up work from issue #65 Relating to Workplan task 3.7.5
1.0
Update the backend code to implement new user management approach - Follow up work from issue #65 Relating to Workplan task 3.7.5
non_priority
update the backend code to implement new user management approach follow up work from issue relating to workplan task
0
598,474
18,245,938,555
IssuesEvent
2021-10-01 18:24:03
AXeL-dev/youtube-viewer
https://api.github.com/repos/AXeL-dev/youtube-viewer
closed
Refactor channel selection
refactor high priority
Switching between channels can be done through routes for each selection/channel instead of filtering videos cache state by the current selected channel.
1.0
Refactor channel selection - Switching between channels can be done through routes for each selection/channel instead of filtering videos cache state by the current selected channel.
priority
refactor channel selection switching between channels can be done through routes for each selection channel instead of filtering videos cache state by the current selected channel
1
15,308
19,400,850,809
IssuesEvent
2021-12-19 06:13:31
ethereum/EIPs
https://api.github.com/repos/ethereum/EIPs
closed
Add mission statement
type: Meta type: EIP1 (Process) stale
Presently the ethereum/EIPs project does not have a mission statement. --- <strike>Recently something changed and now the majority of EIPs here have no path to become "final" standards. Pull request #1100 addresses that issue.</strike> However, one of the EIP editors (the people with commit access here) mentioned that #1100 is not urgent. There are no remaining complaints on #1100, it has EIP editor endorsements, but it is not merged. I reviewed the project README.md and was hoping to find something like "our goal is to discuss and pass high-quality standards reflecting established best practices in the community." So I could tell this person that #1100 is urgent (because presently, standards are prevented from passing). Alas no such line exists, in fact, there is nothing in the README.md that explains why we are contributing here. **It is much easier to set expectations for each other in this project if we have a clearly defined goal. And we should state that goal in the README.md.**
1.0
Add mission statement - Presently the ethereum/EIPs project does not have a mission statement. --- <strike>Recently something changed and now the majority of EIPs here have no path to become "final" standards. Pull request #1100 addresses that issue.</strike> However, one of the EIP editors (the people with commit access here) mentioned that #1100 is not urgent. There are no remaining complaints on #1100, it has EIP editor endorsements, but it is not merged. I reviewed the project README.md and was hoping to find something like "our goal is to discuss and pass high-quality standards reflecting established best practices in the community." So I could tell this person that #1100 is urgent (because presently, standards are prevented from passing). Alas no such line exists, in fact, there is nothing in the README.md that explains why we are contributing here. **It is much easier to set expectations for each other in this project if we have a clearly defined goal. And we should state that goal in the README.md.**
non_priority
add mission statement presently the ethereum eips project does not have a mission statement recently something changed and now the majority of eips here have no path to become final standards pull request addresses that issue however one of the eip editors the people with commit access here mentioned that is not urgent there are no remaining complaints on it has eip editor endorsements but it is not merged i reviewed the project readme md and was hoping to find something like our goal is to discuss and pass high quality standards reflecting established best practices in the community so i could tell this person that is urgent because presently standards are prevented from passing alas no such line exists in fact there is nothing in the readme md that explains why we are contributing here it is much easier to set expectations for each other in this project if we have a clearly defined goal and we should state that goal in the readme md
0
149,135
23,432,756,880
IssuesEvent
2022-08-15 05:53:55
OneCoin-00/BRJM-iOS
https://api.github.com/repos/OneCoin-00/BRJM-iOS
closed
Button 및 TextField Extension 추가
✨ feature 🎨 design
## ⭐️ Issues ## 📌 To Do - [x] 버튼 클릭 효과 적용 - [x] 텍스트필드 하단 라인 및 삭제 버튼 추가
1.0
Button 및 TextField Extension 추가 - ## ⭐️ Issues ## 📌 To Do - [x] 버튼 클릭 효과 적용 - [x] 텍스트필드 하단 라인 및 삭제 버튼 추가
non_priority
button 및 textfield extension 추가 ⭐️ issues 📌 to do 버튼 클릭 효과 적용 텍스트필드 하단 라인 및 삭제 버튼 추가
0
44,731
2,911,361,124
IssuesEvent
2015-06-22 09:03:17
mintty/mintty
https://api.github.com/repos/mintty/mintty
closed
Italics support
auto-migrated Difficulty-Hard Priority-Low Type-Enhancement
``` I would like to use italics in my vim syntax files, prompt, etc. ``` Original issue reported on code.google.com by `randy.ha...@gmail.com` on 10 Nov 2009 at 3:31
1.0
Italics support - ``` I would like to use italics in my vim syntax files, prompt, etc. ``` Original issue reported on code.google.com by `randy.ha...@gmail.com` on 10 Nov 2009 at 3:31
priority
italics support i would like to use italics in my vim syntax files prompt etc original issue reported on code google com by randy ha gmail com on nov at
1
307,112
9,414,200,616
IssuesEvent
2019-04-10 09:35:36
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
"worldobject" ladders is not working
Low Priority
**Version:** 0.7.8.0 beta e.g. blast furnace, crane, oil refinery, excavator, etc. ![20181114084250_1](https://user-images.githubusercontent.com/4980243/48462518-82a44500-e7e9-11e8-8095-0e7454c64546.jpg)
1.0
"worldobject" ladders is not working - **Version:** 0.7.8.0 beta e.g. blast furnace, crane, oil refinery, excavator, etc. ![20181114084250_1](https://user-images.githubusercontent.com/4980243/48462518-82a44500-e7e9-11e8-8095-0e7454c64546.jpg)
priority
worldobject ladders is not working version beta e g blast furnace crane oil refinery excavator etc
1
269,297
28,960,074,561
IssuesEvent
2023-05-10 01:13:04
dpteam/RK3188_TABLET
https://api.github.com/repos/dpteam/RK3188_TABLET
reopened
CVE-2021-3483 (High) detected in linuxv3.0
Mend: dependency security vulnerability
## CVE-2021-3483 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary> <p> <p>miscellaneous core development</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/djbw/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/djbw/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
Versions before kernel 5.12-rc6 are affected <p>Publish Date: 2021-05-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3483>CVE-2021-3483</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3483">https://www.linuxkernelcves.com/cves/CVE-2021-3483</a></p> <p>Release Date: 2021-05-17</p> <p>Fix Resolution: v4.4.265, v4.9.265, v4.14.229, v4.19.185, v5.4.110, v5.10.28, v5.11.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3483 (High) detected in linuxv3.0 - ## CVE-2021-3483 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary> <p> <p>miscellaneous core development</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/djbw/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/djbw/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
Versions before kernel 5.12-rc6 are affected <p>Publish Date: 2021-05-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3483>CVE-2021-3483</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3483">https://www.linuxkernelcves.com/cves/CVE-2021-3483</a></p> <p>Release Date: 2021-05-17</p> <p>Fix Resolution: v4.4.265, v4.9.265, v4.14.229, v4.19.185, v5.4.110, v5.10.28, v5.11.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve high detected in cve high severity vulnerability vulnerable library miscellaneous core development library home page a href found in head commit a href found in base branch master vulnerable source files drivers firewire nosy c vulnerability details a flaw was found in the nosy driver in the linux kernel this issue allows a device to be inserted twice into a doubly linked list leading to a use after free when one of these devices is removed the highest threat from this vulnerability is to confidentiality integrity as well as system availability versions before kernel are affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
211,942
23,856,833,485
IssuesEvent
2022-09-07 01:08:00
HoangBachLeLe/TemplateRepository
https://api.github.com/repos/HoangBachLeLe/TemplateRepository
opened
CVE-2022-38751 (Medium) detected in snakeyaml-1.29.jar
security vulnerability
## CVE-2022-38751 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.29.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.29/6d0cdafb2010f1297e574656551d7145240f6e25/snakeyaml-1.29.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-validation-2.6.4.jar (Root Library) - spring-boot-starter-2.6.4.jar - :x: **snakeyaml-1.29.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-38751 (Medium) detected in snakeyaml-1.29.jar - ## CVE-2022-38751 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.29.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.29/6d0cdafb2010f1297e574656551d7145240f6e25/snakeyaml-1.29.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-validation-2.6.4.jar (Root Library) - spring-boot-starter-2.6.4.jar - :x: **snakeyaml-1.29.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter validation jar root library spring boot starter jar x snakeyaml jar vulnerable library found in base branch main vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
677,863
23,178,266,563
IssuesEvent
2022-07-31 18:50:21
chaotic-aur/packages
https://api.github.com/repos/chaotic-aur/packages
closed
[Request] sqlcl
request:new-pkg priority:low
### Link to the package(s) in the AUR https://aur.archlinux.org/packages/sqlcl ### Utility this package has for you Sqlcl is a command-line utility for Oracle, it is used for remotely connecting to oracle db. I use it to manage Oracle DB which reside in docker container. ### Do you consider the package(s) to be useful for every Chaotic-AUR user? No, but for a few. ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [X] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information _No response_
1.0
[Request] sqlcl - ### Link to the package(s) in the AUR https://aur.archlinux.org/packages/sqlcl ### Utility this package has for you Sqlcl is a command-line utility for Oracle, it is used for remotely connecting to oracle db. I use it to manage Oracle DB which reside in docker container. ### Do you consider the package(s) to be useful for every Chaotic-AUR user? No, but for a few. ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [X] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information _No response_
priority
sqlcl link to the package s in the aur utility this package has for you sqlcl is a command line utility for oracle it is used for remotely connecting to oracle db i use it to manage oracle db which reside in docker container do you consider the package s to be useful for every chaotic aur user no but for a few do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response
1
14,139
17,016,659,615
IssuesEvent
2021-07-02 13:01:23
docker/compose-cli
https://api.github.com/repos/docker/compose-cli
closed
No warning provided for empty environment variables
compatibility
**Description** Use this docker-compose.yaml ```yaml services: web: image: busybox user: "$UID:$GID" ``` `docker-compose up` In docker-compose v1 you get: ``` $ docker-compose up WARNING: The UID variable is not set. Defaulting to a blank string. WARNING: The GID variable is not set. Defaulting to a blank string. ``` In docker-compose v2 you get no warning: ``` $ docker-compose up [+] Running 1/1 ⠿ Container junk_web_1 Started 2.9s Attaching to web_1 web_1 exited with code 0 ``` Since v2 is intended to behave like v1, it should probably warn.
True
No warning provided for empty environment variables - **Description** Use this docker-compose.yaml ```yaml services: web: image: busybox user: "$UID:$GID" ``` `docker-compose up` In docker-compose v1 you get: ``` $ docker-compose up WARNING: The UID variable is not set. Defaulting to a blank string. WARNING: The GID variable is not set. Defaulting to a blank string. ``` In docker-compose v2 you get no warning: ``` $ docker-compose up [+] Running 1/1 ⠿ Container junk_web_1 Started 2.9s Attaching to web_1 web_1 exited with code 0 ``` Since v2 is intended to behave like v1, it should probably warn.
non_priority
no warning provided for empty environment variables description use this docker compose yaml yaml services web image busybox user uid gid docker compose up in docker compose you get docker compose up warning the uid variable is not set defaulting to a blank string warning the gid variable is not set defaulting to a blank string in docker compose you get no warning docker compose up running ⠿ container junk web started attaching to web web exited with code since is intended to behave like it should probably warn
0
270,149
23,494,004,595
IssuesEvent
2022-08-17 22:00:09
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
WARN,TST check stacklevel for all warnings
Testing Warnings
Currently, the stacklevel is only checked for two classes of warnings: https://github.com/pandas-dev/pandas/blob/14de3fd9ca4178bfce5dd681fa5d0925e057c04d/pandas/_testing/_warnings.py#L132-L134 it would be good to extend this to other (all?) classes of warnings, and fixing the parts of the codebase where this fails
1.0
WARN,TST check stacklevel for all warnings - Currently, the stacklevel is only checked for two classes of warnings: https://github.com/pandas-dev/pandas/blob/14de3fd9ca4178bfce5dd681fa5d0925e057c04d/pandas/_testing/_warnings.py#L132-L134 it would be good to extend this to other (all?) classes of warnings, and fixing the parts of the codebase where this fails
non_priority
warn tst check stacklevel for all warnings currently the stacklevel is only checked for two classes of warnings it would be good to extend this to other all classes of warnings and fixing the parts of the codebase where this fails
0
722,688
24,871,800,812
IssuesEvent
2022-10-27 15:44:18
windchime-yk/resources
https://api.github.com/repos/windchime-yk/resources
opened
`og-edge`によるOGP画像の動的配信
Type: Feature Priority: High
Deno環境移植の`vercel/og`として[og-edge](https://github.com/ascorbic/og-edge)がリリースされている。 サンプルを見る限りシンプルなので、deno_blog系のOGP画像を配信する機能を追加したい。
1.0
`og-edge`によるOGP画像の動的配信 - Deno環境移植の`vercel/og`として[og-edge](https://github.com/ascorbic/og-edge)がリリースされている。 サンプルを見る限りシンプルなので、deno_blog系のOGP画像を配信する機能を追加したい。
priority
og edge によるogp画像の動的配信 deno環境移植の vercel og として サンプルを見る限りシンプルなので、deno blog系のogp画像を配信する機能を追加したい。
1
31,923
26,246,523,421
IssuesEvent
2023-01-05 15:40:15
CDCgov/data-exchange-hl7
https://api.github.com/repos/CDCgov/data-exchange-hl7
closed
Create new Functions due to Renaming by OPS team
infrastructure waiting
App - #414 Ops - Functions: #371 - EventHubs: #343 Priority 1: Done 6 Dec 2022 Everything else: Scheduled for Thu 15 Dec Create brand new functions due to renaming: - [x] ocio-ede-dev-hl7-mmg-based-transformer (*) - [x] ocio-ede-dev-hl7-cache-loader - [x] extra config: - [x] MMG_AT_TIME_TRIGGER = "0 0 7 * * *" - [x] PHINVOCAB_TME_TRIGGER = "0 0 9 * * * " - [x] ocio-ede-dev-hl7-receiver-debatcher (*) - [x] extra config: - [x] BlobIngestConnectionString - [x] BlobIngestContainerName - [x] ocio-ede-dev-hl7-mmg-validator (*) - [x] ocio-ede-dev-hl7-structure-validator (*) - [x] ocio-ede-dev-hl7-mmg-sql-transformer (*) [Priority 1 - Done 2022-12-06] - [x] ocio-ede-dev-hl7-lake-segments-transformer (*) [ RENAMED from ocio-ede-dev-hl7-lake-**of**-segments-transformer] For the fn with asterisk above, make sure two Event Hub topics exists: - [x] {prefix}-{fn-name}-ok (ex.: ocio-ede-dev-hl7-mmg-based-transformer-ok) [Priority 1 - Done 2022-12-06] - [x] {prefix}-{fn-name}-err (ex.: ocio-ede-dev-hl7-mmg-based-transformer-err) [Priority 1 - Done 2022-12-06] Configuration. Each of the functions above should have the following configurations: * EventHubConnectionString = {event hub namespace connection string} * EventHubConsumerGroup = {fn-name}-cg-001 * EventHubReceiveName = {the previous OK event hub on the pipeline} * EventHubSendOkName = {prefix}-{fn-name}-ok * EventHubSendErrName = {prefix}-{fn-name}-err * REDIS_CACHE_KEY = {az redis cache key} * REDIS_CACHE_NAME = {az redis cache name} * FUNCTIONS_EXTENSION_VERSION = ~4 * FUNCTION_WORKER_RUNTIME = java * WEBSITE_RUN_FROM_PACKAGE = 1 Ops specific keys: * APPINSIGHTS_INSTRUMENTATIONKEY * APPLICATIONINSIGHTS_CONNECTION_STRING * AzureWebJobsDashboard * AzireWebJobsStorage
1.0
Create new Functions due to Renaming by OPS team - App - #414 Ops - Functions: #371 - EventHubs: #343 Priority 1: Done 6 Dec 2022 Everything else: Scheduled for Thu 15 Dec Create brand new functions due to renaming: - [x] ocio-ede-dev-hl7-mmg-based-transformer (*) - [x] ocio-ede-dev-hl7-cache-loader - [x] extra config: - [x] MMG_AT_TIME_TRIGGER = "0 0 7 * * *" - [x] PHINVOCAB_TME_TRIGGER = "0 0 9 * * * " - [x] ocio-ede-dev-hl7-receiver-debatcher (*) - [x] extra config: - [x] BlobIngestConnectionString - [x] BlobIngestContainerName - [x] ocio-ede-dev-hl7-mmg-validator (*) - [x] ocio-ede-dev-hl7-structure-validator (*) - [x] ocio-ede-dev-hl7-mmg-sql-transformer (*) [Priority 1 - Done 2022-12-06] - [x] ocio-ede-dev-hl7-lake-segments-transformer (*) [ RENAMED from ocio-ede-dev-hl7-lake-**of**-segments-transformer] For the fn with asterisk above, make sure two Event Hub topics exists: - [x] {prefix}-{fn-name}-ok (ex.: ocio-ede-dev-hl7-mmg-based-transformer-ok) [Priority 1 - Done 2022-12-06] - [x] {prefix}-{fn-name}-err (ex.: ocio-ede-dev-hl7-mmg-based-transformer-err) [Priority 1 - Done 2022-12-06] Configuration. Each of the functions above should have the following configurations: * EventHubConnectionString = {event hub namespace connection string} * EventHubConsumerGroup = {fn-name}-cg-001 * EventHubReceiveName = {the previous OK event hub on the pipeline} * EventHubSendOkName = {prefix}-{fn-name}-ok * EventHubSendErrName = {prefix}-{fn-name}-err * REDIS_CACHE_KEY = {az redis cache key} * REDIS_CACHE_NAME = {az redis cache name} * FUNCTIONS_EXTENSION_VERSION = ~4 * FUNCTION_WORKER_RUNTIME = java * WEBSITE_RUN_FROM_PACKAGE = 1 Ops specific keys: * APPINSIGHTS_INSTRUMENTATIONKEY * APPLICATIONINSIGHTS_CONNECTION_STRING * AzureWebJobsDashboard * AzireWebJobsStorage
non_priority
create new functions due to renaming by ops team app ops functions eventhubs priority done dec everything else scheduled for thu dec create brand new functions due to renaming ocio ede dev mmg based transformer ocio ede dev cache loader extra config mmg at time trigger phinvocab tme trigger ocio ede dev receiver debatcher extra config blobingestconnectionstring blobingestcontainername ocio ede dev mmg validator ocio ede dev structure validator ocio ede dev mmg sql transformer ocio ede dev lake segments transformer for the fn with asterisk above make sure two event hub topics exists prefix fn name ok ex ocio ede dev mmg based transformer ok prefix fn name err ex ocio ede dev mmg based transformer err configuration each of the functions above should have the following configurations eventhubconnectionstring event hub namespace connection string eventhubconsumergroup fn name cg eventhubreceivename the previous ok event hub on the pipeline eventhubsendokname prefix fn name ok eventhubsenderrname prefix fn name err redis cache key az redis cache key redis cache name az redis cache name functions extension version function worker runtime java website run from package ops specific keys appinsights instrumentationkey applicationinsights connection string azurewebjobsdashboard azirewebjobsstorage
0
661,283
22,046,310,901
IssuesEvent
2022-05-30 02:23:34
microsoft/Recognizers-Text
https://api.github.com/repos/microsoft/Recognizers-Text
closed
[NL DateTimeV2] Multiple issues from speech across date/time sub-types
bug Priority:P2
1- Space between the day and rd/th like “Ik ga 20 ste van de volgende maand terug” isn’t recognized as a date 2- The expression “Ik ga terug vier dagen gerekend vanaf gisteren” is resolved as four days from today, but replacing the text with the digit four is not detected “Ik ga terug 4 dagen gerekend vanaf gisteren”. 3- “Het zal 3 dagen vanaf dinsdag gebeuren”, “Het zal 3 dagen na de 12e januari gebeuren” not resolved correctly 4- Replacing 1 with een in “week 1” is not detected, also the expression “Het gebeurde door een tijdsverschil van een seconde” both “eens” are resolves to the number 1 while only the latter is a number. 5- Jan-feb detected as a range but not jan feb in the example “APEC zal in Korea plaatsvinden jan-feb 2017” 6- This afternoon not recognized, such as the statement “laten we koffie gaan drinken op gebouw 4 deze middag” 7- The pattern [dayoftheweek][relativeweek] is not captured, like “We ontmoetten elkaar op dinsdag van vorige week” resolves last week and Tuesday and not 8- Seconde is not captured in time, “Het gebeurde door een tijdsverschil van een seconde” doesn’t extract “een seconde” 9- Meals in the day not recognized “laten we rond het avondeten afspreken” “laten we na het ontbijt afspreken” or “laten we voor de lunch afspreken” 10- Repeat cycles such as “Om de week vrijdag” “plan alsjeblieft een meeting voor de week beginnend op 4 feb” not captured, also muiltiples in the monthly cycle is not captured like “Plan een halfjaarlijkse vergadering in” and “Plan alsjeblieft een meeting halfjaarlijks in”, “Laten we ervoor zorgen dat dat elke weekdag gebeurt” 11- Christmas not captured in the statement “welke beschikbare tijd dan ook om vandaag tijdens lunchtijd boodschappen te doen” , “welke films spelen er op Kerstavond om 18.00?” 12- “Boek iets van de 26e juni van 2020 tot de 28e juni van 2020” not captured as a range (but captured as individual dates and the later is with a modifier (before) 13- Relative time in the year not captured, 
like [beginning|late|middle] [Year] e.g. “Dit bedrijf werd gestart aan het begin van 2000” 14- Holidays like labor day not captured in the example “Laten we de Dag van de Arbeid vieren”, Juneteenth “Juneteenth, wat ook bekendstaat als Vrijheidsdag en Jubilee Day, dateert terug tot 1865 en valt elk jaar op 19 juni” 15- Weekend not captured at a datetime “Doe je dat elk weekend?” 16- [Next] Easter is not captured in the phrase “Wanneer was de eropvolgende Pasen” while it is when using volgende 17- Wrong range in the statements “Ik zal voor het einde van december teruggaan”, “ Ik zal teruggaan tegen het eind van deze maand”, and “Ik zal teruggaan voor het eind van dit jaar”, “De reeks is van 2007 tot het einde van 2008”, “Het is gesloten van april tot het einde van juni” 18- Working week in “Het zal deze werkweek gebeuren” not resolved
1.0
[NL DateTimeV2] Multiple issues from speech across date/time sub-types - 1- Space between the day and rd/th like “Ik ga 20 ste van de volgende maand terug” isn’t recognized as a date 2- The expression “Ik ga terug vier dagen gerekend vanaf gisteren” is resolved as four days from today, but replacing the text with the digit four is not detected “Ik ga terug 4 dagen gerekend vanaf gisteren”. 3- “Het zal 3 dagen vanaf dinsdag gebeuren”, “Het zal 3 dagen na de 12e januari gebeuren” not resolved correctly 4- Replacing 1 with een in “week 1” is not detected, also the expression “Het gebeurde door een tijdsverschil van een seconde” both “eens” are resolves to the number 1 while only the latter is a number. 5- Jan-feb detected as a range but not jan feb in the example “APEC zal in Korea plaatsvinden jan-feb 2017” 6- This afternoon not recognized, such as the statement “laten we koffie gaan drinken op gebouw 4 deze middag” 7- The pattern [dayoftheweek][relativeweek] is not captured, like “We ontmoetten elkaar op dinsdag van vorige week” resolves last week and Tuesday and not 8- Seconde is not captured in time, “Het gebeurde door een tijdsverschil van een seconde” doesn’t extract “een seconde” 9- Meals in the day not recognized “laten we rond het avondeten afspreken” “laten we na het ontbijt afspreken” or “laten we voor de lunch afspreken” 10- Repeat cycles such as “Om de week vrijdag” “plan alsjeblieft een meeting voor de week beginnend op 4 feb” not captured, also muiltiples in the monthly cycle is not captured like “Plan een halfjaarlijkse vergadering in” and “Plan alsjeblieft een meeting halfjaarlijks in”, “Laten we ervoor zorgen dat dat elke weekdag gebeurt” 11- Christmas not captured in the statement “welke beschikbare tijd dan ook om vandaag tijdens lunchtijd boodschappen te doen” , “welke films spelen er op Kerstavond om 18.00?” 12- “Boek iets van de 26e juni van 2020 tot de 28e juni van 2020” not captured as a range (but captured as individual dates and the later is 
with a modifier (before) 13- Relative time in the year not captured, like [beginning|late|middle] [Year] e.g. “Dit bedrijf werd gestart aan het begin van 2000” 14- Holidays like labor day not captured in the example “Laten we de Dag van de Arbeid vieren”, Juneteenth “Juneteenth, wat ook bekendstaat als Vrijheidsdag en Jubilee Day, dateert terug tot 1865 en valt elk jaar op 19 juni” 15- Weekend not captured at a datetime “Doe je dat elk weekend?” 16- [Next] Easter is not captured in the phrase “Wanneer was de eropvolgende Pasen” while it is when using volgende 17- Wrong range in the statements “Ik zal voor het einde van december teruggaan”, “ Ik zal teruggaan tegen het eind van deze maand”, and “Ik zal teruggaan voor het eind van dit jaar”, “De reeks is van 2007 tot het einde van 2008”, “Het is gesloten van april tot het einde van juni” 18- Working week in “Het zal deze werkweek gebeuren” not resolved
priority
multiple issues from speech across date time sub types space between the day and rd th like “ik ga ste van de volgende maand terug” isn’t recognized as a date the expression “ik ga terug vier dagen gerekend vanaf gisteren” is resolved as four days from today but replacing the text with the digit four is not detected “ik ga terug dagen gerekend vanaf gisteren” “het zal dagen vanaf dinsdag gebeuren” “het zal dagen na de januari gebeuren” not resolved correctly replacing with een in “week ” is not detected also the expression “het gebeurde door een tijdsverschil van een seconde” both “eens” are resolves to the number while only the latter is a number jan feb detected as a range but not jan feb in the example “apec zal in korea plaatsvinden jan feb ” this afternoon not recognized such as the statement “laten we koffie gaan drinken op gebouw deze middag” the pattern is not captured like “we ontmoetten elkaar op dinsdag van vorige week” resolves last week and tuesday and not seconde is not captured in time “het gebeurde door een tijdsverschil van een seconde” doesn’t extract “een seconde” meals in the day not recognized “laten we rond het avondeten afspreken” “laten we na het ontbijt afspreken” or “laten we voor de lunch afspreken” repeat cycles such as “om de week vrijdag” “plan alsjeblieft een meeting voor de week beginnend op feb” not captured also muiltiples in the monthly cycle is not captured like “plan een halfjaarlijkse vergadering in” and “plan alsjeblieft een meeting halfjaarlijks in” “laten we ervoor zorgen dat dat elke weekdag gebeurt” christmas not captured in the statement “welke beschikbare tijd dan ook om vandaag tijdens lunchtijd boodschappen te doen” “welke films spelen er op kerstavond om ” “boek iets van de juni van tot de juni van ” not captured as a range but captured as individual dates and the later is with a modifier before relative time in the year not captured like e g “dit bedrijf werd gestart aan het begin van ” holidays like labor day not 
captured in the example “laten we de dag van de arbeid vieren” juneteenth “juneteenth wat ook bekendstaat als vrijheidsdag en jubilee day dateert terug tot en valt elk jaar op juni” weekend not captured at a datetime “doe je dat elk weekend ” easter is not captured in the phrase “wanneer was de eropvolgende pasen” while it is when using volgende wrong range in the statements “ik zal voor het einde van december teruggaan” “ ik zal teruggaan tegen het eind van deze maand” and “ik zal teruggaan voor het eind van dit jaar” “de reeks is van tot het einde van ” “het is gesloten van april tot het einde van juni” working week in “het zal deze werkweek gebeuren” not resolved
1
65,410
3,228,109,250
IssuesEvent
2015-10-11 20:12:14
cylc/cylc
https://api.github.com/repos/cylc/cylc
opened
Dynamic family size?
priority low
There are some situations where it would be useful to determine the number of family members just before a family submits, e.g. a suite that spawns forecast jobs to model all of the storm systems currently active in some part of the world. **This can be done now by running a sub-suite that takes the number of family members as a Jinja2 input variable.** But it would be nicer to have the family in the main suite. Low priority, as the sub-suite implementation of this works fine.
1.0
Dynamic family size? - There are some situations where it would be useful to determine the number of family members just before a family submits, e.g. a suite that spawns forecast jobs to model all of the storm systems currently active in some part of the world. **This can be done now by running a sub-suite that takes the number of family members as a Jinja2 input variable.** But it would be nicer to have the family in the main suite. Low priority, as the sub-suite implementation of this works fine.
priority
dynamic family size there are some situations where it would be useful to determine the number of family members just before a family submits e g a suite that spawns forecast jobs to model all of the storm systems currently active in some part of the world this can be done now by running a sub suite that takes the number of family members as a input variable but it would be nicer to have the family in the main suite low priority as the sub suite implementation of this works fine
1
56,008
14,894,769,041
IssuesEvent
2021-01-21 08:06:32
SasView/sasview
https://api.github.com/repos/SasView/sasview
closed
ESS_GUI: Model label on plot keeps being reset
Hackathon: Plotting SasView Bug Fixing defect
Noticed by User EmilyE on Mac. Verified by @smk78 on Windows. Using 5.0.2. Matplotlib really, really, doesn't like long dataset names. If you display such a dataset the actual graph gets squashed up in favour of displaying the legend! Like this: ![image](https://user-images.githubusercontent.com/10938679/86573418-49938800-bf6c-11ea-80dd-b1e7809ae969.png) You can use Modify Plot Property for each dataset on the plot to change the label for that data to something more manageable, for example ![image](https://user-images.githubusercontent.com/10938679/86573633-91b2aa80-bf6c-11ea-8d10-2c925e18bcc3.png) But, if you then change any parameter in the model, forcing a re-draw, the _model label_ gets reset, like this: ![image](https://user-images.githubusercontent.com/10938679/86573851-dcccbd80-bf6c-11ea-9271-e5329e7235d5.png) Which is very irritating! Modify Plot Property doesn't seem to give any control of the font size of legend text? As a more general comment, I think the default font size of legend text could be reduced. Maybe by two font sizes?
1.0
ESS_GUI: Model label on plot keeps being reset - Noticed by User EmilyE on Mac. Verified by @smk78 on Windows. Using 5.0.2. Matplotlib really, really, doesn't like long dataset names. If you display such a dataset the actual graph gets squashed up in favour of displaying the legend! Like this: ![image](https://user-images.githubusercontent.com/10938679/86573418-49938800-bf6c-11ea-80dd-b1e7809ae969.png) You can use Modify Plot Property for each dataset on the plot to change the label for that data to something more manageable, for example ![image](https://user-images.githubusercontent.com/10938679/86573633-91b2aa80-bf6c-11ea-8d10-2c925e18bcc3.png) But, if you then change any parameter in the model, forcing a re-draw, the _model label_ gets reset, like this: ![image](https://user-images.githubusercontent.com/10938679/86573851-dcccbd80-bf6c-11ea-9271-e5329e7235d5.png) Which is very irritating! Modify Plot Property doesn't seem to give any control of the font size of legend text? As a more general comment, I think the default font size of legend text could be reduced. Maybe by two font sizes?
non_priority
ess gui model label on plot keeps being reset noticed by user emilye on mac verified by on windows using matplotlib really really doesn t like long dataset names if you display such a dataset the actual graph gets squashed up in favour of displaying the legend like this you can use modify plot property for each dataset on the plot to change the label for that data to something more manageable for example but if you then change any parameter in the model forcing a re draw the model label gets reset like this which is very irritating modify plot property doesn t seem to give any control of the font size of legend text as a more general comment i think the default font size of legend text could be reduced maybe by two font sizes
0
73,753
15,281,764,016
IssuesEvent
2021-02-23 08:37:36
tt9133github/kubernetes
https://api.github.com/repos/tt9133github/kubernetes
opened
CVE-2020-28500 (Medium) detected in multiple libraries
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.4.tgz</b>, <b>lodash-4.12.0.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary> <p> <details><summary><b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-cli-2.3.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-4.12.0.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.12.0.tgz">https://registry.npmjs.org/lodash/-/lodash-4.12.0.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-plugin-theme-api/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-plugin-theme-api-1.1.2.tgz (Root Library) - :x: **lodash-4.12.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-4.17.11.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: 
kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-plugin-mermaid/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-plugin-mermaid-0.0.9.tgz (Root Library) - phantomjs-1.9.20.tgz - request-2.67.0.tgz - form-data-1.0.1.tgz - async-2.6.2.tgz - :x: **lodash-4.17.11.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tt9133github/kubernetes/commit/6e30df8c07a51f900077867824d3807dffb18045">6e30df8c07a51f900077867824d3807dffb18045</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity 
Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28500 (Medium) detected in multiple libraries - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.4.tgz</b>, <b>lodash-4.12.0.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary> <p> <details><summary><b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-cli/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-cli-2.3.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-4.12.0.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.12.0.tgz">https://registry.npmjs.org/lodash/-/lodash-4.12.0.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-plugin-theme-api/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-plugin-theme-api-1.1.2.tgz (Root Library) - :x: **lodash-4.12.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-4.17.11.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: kubernetes/staging/src/k8s.io/kubectl/docs/book/package.json</p> <p>Path to vulnerable library: 
kubernetes/staging/src/k8s.io/kubectl/docs/book/node_modules/gitbook-plugin-mermaid/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - gitbook-plugin-mermaid-0.0.9.tgz (Root Library) - phantomjs-1.9.20.tgz - request-2.67.0.tgz - form-data-1.0.1.tgz - async-2.6.2.tgz - :x: **lodash-4.17.11.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tt9133github/kubernetes/commit/6e30df8c07a51f900077867824d3807dffb18045">6e30df8c07a51f900077867824d3807dffb18045</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity 
Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash modular utilities library home page a href path to dependency file kubernetes staging src io kubectl docs book package json path to vulnerable library kubernetes staging src io kubectl docs book node modules gitbook cli node modules lodash package json dependency hierarchy gitbook cli tgz root library x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file kubernetes staging src io kubectl docs book package json path to vulnerable library kubernetes staging src io kubectl docs book node modules gitbook plugin theme api node modules lodash package json dependency hierarchy gitbook plugin theme api tgz root library x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file kubernetes staging src io kubectl docs book package json path to vulnerable library kubernetes staging src io kubectl docs book node modules gitbook plugin mermaid node modules lodash package json dependency hierarchy gitbook plugin mermaid tgz root library phantomjs tgz request tgz form data tgz async tgz x lodash tgz vulnerable library found in head commit a href vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions steps to reproduce provided by reporter liyuan chen var lo require lodash function build blank n var ret for var i i n i ret return ret var s build blank var date now lo trim s var time date now console log time time var date now lo tonumber s var time date now console log time time var date now lo trimend s var time date now console log time time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity 
low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href step up your open source security game with whitesource
0
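The CVE record above reproduces the lodash ReDoS with a long run of spaces sandwiched between two digits. As a hedged sketch in Python (not the lodash code), the same probe input can be built and trimmed in linear time with `str.strip`, which avoids the regex backtracking that the report attributes to `trim`/`trimEnd`/`toNumber`:

```python
import time

# Re-creation of the reporter's probe input: "1", many spaces, "1".
# The reported lodash issue is quadratic regex backtracking on this shape;
# Python's str.strip is linear and does not use the regex engine at all.
def build_blank(n):
    return "1" + " " * n + "1"

s = build_blank(50_000)
t0 = time.perf_counter()
trimmed = s.strip()  # no leading/trailing whitespace here, so s is unchanged
elapsed_ms = (time.perf_counter() - t0) * 1000
print(len(s), trimmed == s, f"{elapsed_ms:.1f} ms")
```

The general mitigation for this class of bug is the same in any language: trim with a linear scan instead of a backtracking pattern anchored at the string's end.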
10,948
8,228,550,434
IssuesEvent
2018-09-07 05:59:21
snowplow/snowplow
https://api.github.com/repos/snowplow/snowplow
closed
Clojure Collector: update CORS configuration
bug security
Tomcat 8.0.53 changed their default CORS configuration, from [CVE-2018-8014](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8014): >The defaults settings for the CORS filter provided in Apache Tomcat 9.0.0.M1 to 9.0.8, 8.5.0 to 8.5.31, 8.0.0.RC1 to 8.0.52, 7.0.41 to 7.0.88 are insecure and enable 'supportsCredentials' for all origins. It is expected that users of the CORS filter will have configured it appropriately for their environment rather than using it in the default configuration. Therefore, it is expected that most users will not be impacted by this issue. We might want to replicate the approach taken in the ssc: https://github.com/snowplow/snowplow/blob/7ea7feb7a3b7634cd20641c4484171124c0da7f5/2-collectors/scala-stream-collector/core/src/main/scala/com.snowplowanalytics.snowplow.collectors.scalastream/CollectorService.scala#L133-L139 instead of relying on a configuration file: https://github.com/snowplow/snowplow/blob/7ea7feb7a3b7634cd20641c4484171124c0da7f5/2-collectors/clojure-collector/java-servlet/war-resources/.ebextensions/web.xml#L393-L400 cc @jbeemster
True
Clojure Collector: update CORS configuration - Tomcat 8.0.53 changed their default CORS configuration, from [CVE-2018-8014](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8014): >The defaults settings for the CORS filter provided in Apache Tomcat 9.0.0.M1 to 9.0.8, 8.5.0 to 8.5.31, 8.0.0.RC1 to 8.0.52, 7.0.41 to 7.0.88 are insecure and enable 'supportsCredentials' for all origins. It is expected that users of the CORS filter will have configured it appropriately for their environment rather than using it in the default configuration. Therefore, it is expected that most users will not be impacted by this issue. We might want to replicate the approach taken in the ssc: https://github.com/snowplow/snowplow/blob/7ea7feb7a3b7634cd20641c4484171124c0da7f5/2-collectors/scala-stream-collector/core/src/main/scala/com.snowplowanalytics.snowplow.collectors.scalastream/CollectorService.scala#L133-L139 instead of relying on a configuration file: https://github.com/snowplow/snowplow/blob/7ea7feb7a3b7634cd20641c4484171124c0da7f5/2-collectors/clojure-collector/java-servlet/war-resources/.ebextensions/web.xml#L393-L400 cc @jbeemster
non_priority
clojure collector update cors configuration tomcat changed their default cors configuration from the defaults settings for the cors filter provided in apache tomcat to to to to are insecure and enable supportscredentials for all origins it is expected that users of the cors filter will have configured it appropriately for their environment rather than using it in the default configuration therefore it is expected that most users will not be impacted by this issue we might want to replicate the approach taken in the ssc instead of relying on a configuration file cc jbeemster
0
328,439
9,994,942,014
IssuesEvent
2019-07-11 18:57:20
INN/largo
https://api.github.com/repos/INN/largo
closed
apply font-display:block to fontello
Estimate: < 2 Hours category: styles priority: low type: improvement
https://developers.google.com/web/updates/2016/02/font-display > **block** > > block gives the font face a short block period (3s is recommended in most cases) and an infinite swap period. In other words, the browser draws "invisible" text at first if the font is not loaded, but swaps the font face in as soon as it loads. To do this the browser creates an anonymous font face with metrics similar to the selected font but with all glyphs containing no "ink." This value should only be used if rendering text in a particular typeface is required for the page to be useable.
1.0
apply font-display:block to fontello - https://developers.google.com/web/updates/2016/02/font-display > **block** > > block gives the font face a short block period (3s is recommended in most cases) and an infinite swap period. In other words, the browser draws "invisible" text at first if the font is not loaded, but swaps the font face in as soon as it loads. To do this the browser creates an anonymous font face with metrics similar to the selected font but with all glyphs containing no "ink." This value should only be used if rendering text in a particular typeface is required for the page to be useable.
priority
apply font display block to fontello block block gives the font face a short block period is recommended in most cases and an infinite swap period in other words the browser draws invisible text at first if the font is not loaded but swaps the font face in as soon as it loads to do this the browser creates an anonymous font face with metrics similar to the selected font but with all glyphs containing no ink this value should only be used if rendering text in a particular typeface is required for the page to be useable
1
19,468
4,413,792,715
IssuesEvent
2016-08-13 02:12:25
PokemonGoMap/PokemonGo-Map
https://api.github.com/repos/PokemonGoMap/PokemonGo-Map
closed
Display more helpful error when account has not accepted ToS
backend documentation enhancement feature request minor
Hello, i get this error message for 20 minutes now : 2016-08-08 19:21:29,384 [ search_worker_7][ search][ ERROR] Exception in search_worker: sequence index must be integer, not 'str' Traceback (most recent call last): File "/Users/Musa/Documents/PokemonGo-Map/pogom/search.py", line 242, in search_worker_thread parsed = parse_map(response_dict, step_location) File "/Users/Musa/Documents/PokemonGo-Map/pogom/models.py", line 296, in parse_map cells = map_dict['responses']['GET_MAP_OBJECTS']['map_cells'] TypeError: sequence index must be integer, not 'str' Can someone help me? I have the lattest version and run it on MacOs.
1.0
Display more helpful error when account has not accepted ToS - Hello, i get this error message for 20 minutes now : 2016-08-08 19:21:29,384 [ search_worker_7][ search][ ERROR] Exception in search_worker: sequence index must be integer, not 'str' Traceback (most recent call last): File "/Users/Musa/Documents/PokemonGo-Map/pogom/search.py", line 242, in search_worker_thread parsed = parse_map(response_dict, step_location) File "/Users/Musa/Documents/PokemonGo-Map/pogom/models.py", line 296, in parse_map cells = map_dict['responses']['GET_MAP_OBJECTS']['map_cells'] TypeError: sequence index must be integer, not 'str' Can someone help me? I have the lattest version and run it on MacOs.
non_priority
display more helpful error when account has not accepted tos hello i get this error message for minutes now exception in search worker sequence index must be integer not str traceback most recent call last file users musa documents pokemongo map pogom search py line in search worker thread parsed parse map response dict step location file users musa documents pokemongo map pogom models py line in parse map cells map dict typeerror sequence index must be integer not str can someone help me i have the lattest version and run it on macos
0
705,042
24,219,305,358
IssuesEvent
2022-09-26 09:29:53
Co-Laon/claon-server
https://api.github.com/repos/Co-Laon/claon-server
closed
검색 화면 구현
enhancement priority: medium
## Describe 검색어와 유사한 이용자와 암장을 검색 허용 ## (Optional) Solution Please describe your preferred solution -
1.0
검색 화면 구현 - ## Describe 검색어와 유사한 이용자와 암장을 검색 허용 ## (Optional) Solution Please describe your preferred solution -
priority
검색 화면 구현 describe 검색어와 유사한 이용자와 암장을 검색 허용 optional solution please describe your preferred solution
1
343,357
10,328,720,611
IssuesEvent
2019-09-02 10:13:08
webpack/schema-utils
https://api.github.com/repos/webpack/schema-utils
closed
improve output in some cases
priority: 5 (nice to have) semver: Minor type: Feature
<!-- Issues are so 🔥 If you remove or skip this template, you'll make the 🐼 sad and the mighty god of Github will appear and pile-drive the close button from a great height while making animal noises. 👉🏽 Need support, advice, or help? Don't open an issue! Head to StackOverflow or https://gitter.im/webpack/webpack. --> - Operating System: no matter - Node Version: no matter - NPM Version: no matter - webpack Version: no matter - schema-utils Version: 2.1.0 ### Feature Proposal Example: ``` optimization: { runtimeChunk: { name: /fef/ } } ``` ### Feature Use Case Actual output: ``` - configuration.optimization.runtimeChunk should be one of these: boolean | "single" | "multiple" | object { name? } -> Create an additional chunk which contains only the webpack runtime and chunk hash maps Details: * configuration.optimization.runtimeChunk.name should be a string. * configuration.optimization.runtimeChunk.name should be an instance of function. * configuration.optimization.runtimeChunk.name should be one of these: string | function -> The name or name factory for the runtime chunks ``` Expected output: ``` - configuration.optimization.runtimeChunk should be one of these: boolean | "single" | "multiple" | object { name? } -> Create an additional chunk which contains only the webpack runtime and chunk hash maps Details: * configuration.optimization.runtimeChunk.name should be one of these: string | function -> The name or name factory for the runtime chunks Details: * configuration.optimization.runtimeChunk.name should be a string. * configuration.optimization.runtimeChunk.name should be an instance of function. ```
1.0
improve output in some cases - <!-- Issues are so 🔥 If you remove or skip this template, you'll make the 🐼 sad and the mighty god of Github will appear and pile-drive the close button from a great height while making animal noises. 👉🏽 Need support, advice, or help? Don't open an issue! Head to StackOverflow or https://gitter.im/webpack/webpack. --> - Operating System: no matter - Node Version: no matter - NPM Version: no matter - webpack Version: no matter - schema-utils Version: 2.1.0 ### Feature Proposal Example: ``` optimization: { runtimeChunk: { name: /fef/ } } ``` ### Feature Use Case Actual output: ``` - configuration.optimization.runtimeChunk should be one of these: boolean | "single" | "multiple" | object { name? } -> Create an additional chunk which contains only the webpack runtime and chunk hash maps Details: * configuration.optimization.runtimeChunk.name should be a string. * configuration.optimization.runtimeChunk.name should be an instance of function. * configuration.optimization.runtimeChunk.name should be one of these: string | function -> The name or name factory for the runtime chunks ``` Expected output: ``` - configuration.optimization.runtimeChunk should be one of these: boolean | "single" | "multiple" | object { name? } -> Create an additional chunk which contains only the webpack runtime and chunk hash maps Details: * configuration.optimization.runtimeChunk.name should be one of these: string | function -> The name or name factory for the runtime chunks Details: * configuration.optimization.runtimeChunk.name should be a string. * configuration.optimization.runtimeChunk.name should be an instance of function. ```
priority
improve output in some cases issues are so 🔥 if you remove or skip this template you ll make the 🐼 sad and the mighty god of github will appear and pile drive the close button from a great height while making animal noises 👉🏽 need support advice or help don t open an issue head to stackoverflow or operating system no matter node version no matter npm version no matter webpack version no matter schema utils version feature proposal example optimization runtimechunk name fef feature use case actual output configuration optimization runtimechunk should be one of these boolean single multiple object name create an additional chunk which contains only the webpack runtime and chunk hash maps details configuration optimization runtimechunk name should be a string configuration optimization runtimechunk name should be an instance of function configuration optimization runtimechunk name should be one of these string function the name or name factory for the runtime chunks expected output configuration optimization runtimechunk should be one of these boolean single multiple object name create an additional chunk which contains only the webpack runtime and chunk hash maps details configuration optimization runtimechunk name should be one of these string function the name or name factory for the runtime chunks details configuration optimization runtimechunk name should be a string configuration optimization runtimechunk name should be an instance of function
1
89,922
11,302,730,342
IssuesEvent
2020-01-17 18:22:58
18F/identity-style-guide
https://api.github.com/repos/18F/identity-style-guide
closed
Develop an initial content style guide page
skill: content design / strategy type: feature request
**Why:** By consistently practicing language in an intentional way, we can provide content that supports login.gov users’ needs and improve their experience on our sites and materials. **Stories:** - As a login.gov team member, I would like to use design.login.gov as a go-to reference when writing login materials, which would help us write clear and consistent content across teams and channels - [Possible story] As a journalist/writer reporting on login.gov, I would like to use the content style guide to help me understand common login acronyms and terms. Also, how to format them via word list **Steps** - [x] Use [Login.gov Content Guidelines](https://docs.google.com/document/d/1gasSrnvRG7BVB_iK8YUeXteBymhN0u1Ywz7EABNgJh0/edit#heading=h.r8bl0nbf9nug) 🔒 to develop the initial page - [x] Determine where it should be located in design.login.gov - [ ] Have team members to review - [ ] Publish it live and share it via #login channel - [ ] Create additional issues as needed if there are actions beyond developing an initial content style guide page **Additional resources:** - [login.gov content style guide](https://github.com/18F/identity-private/wiki/Content-Style-Guide) from 2017 - Inspiration: [VA.gov content style guide](https://design.va.gov/content-style-guide/) **Notes:** - Issue #47 is related, which we can determine whether this current issue answers that issue or not (and close if needed)
1.0
Develop an initial content style guide page - **Why:** By consistently practicing language in an intentional way, we can provide content that supports login.gov users’ needs and improve their experience on our sites and materials. **Stories:** - As a login.gov team member, I would like to use design.login.gov as a go-to reference when writing login materials, which would help us write clear and consistent content across teams and channels - [Possible story] As a journalist/writer reporting on login.gov, I would like to use the content style guide to help me understand common login acronyms and terms. Also, how to format them via word list **Steps** - [x] Use [Login.gov Content Guidelines](https://docs.google.com/document/d/1gasSrnvRG7BVB_iK8YUeXteBymhN0u1Ywz7EABNgJh0/edit#heading=h.r8bl0nbf9nug) 🔒 to develop the initial page - [x] Determine where it should be located in design.login.gov - [ ] Have team members to review - [ ] Publish it live and share it via #login channel - [ ] Create additional issues as needed if there are actions beyond developing an initial content style guide page **Additional resources:** - [login.gov content style guide](https://github.com/18F/identity-private/wiki/Content-Style-Guide) from 2017 - Inspiration: [VA.gov content style guide](https://design.va.gov/content-style-guide/) **Notes:** - Issue #47 is related, which we can determine whether this current issue answers that issue or not (and close if needed)
non_priority
develop an initial content style guide page why by consistently practicing language in an intentional way we can provide content that supports login gov users’ needs and improve their experience on our sites and materials stories as a login gov team member i would like to use design login gov as a go to reference when writing login materials which would help us write clear and consistent content across teams and channels as a journalist writer reporting on login gov i would like to use the content style guide to help me understand common login acronyms and terms also how to format them via word list steps use 🔒 to develop the initial page determine where it should be located in design login gov have team members to review publish it live and share it via login channel create additional issues as needed if there are actions beyond developing an initial content style guide page additional resources from inspiration notes issue is related which we can determine whether this current issue answers that issue or not and close if needed
0
822,796
30,885,220,174
IssuesEvent
2023-08-03 21:05:56
letehaha/budget-tracker-fe
https://api.github.com/repos/letehaha/budget-tracker-fe
closed
Update monobank balances history not by calculating it from income/expense, but by just reading "balance" property
type::enhancement repo: backend priority-0-highest
Currently, each Monobank's account balance history is calculated based on transaction movements, and it is calculated manually. Since the bank already providing us info about the account's balance "after" the tx, we actually just need to update the balance history for that account by just setting the `balance` value. The best option is to probably take the latest transaction's "balance" of the single date.
1.0
Update monobank balances history not by calculating it from income/expense, but by just reading "balance" property - Currently, each Monobank's account balance history is calculated based on transaction movements, and it is calculated manually. Since the bank already providing us info about the account's balance "after" the tx, we actually just need to update the balance history for that account by just setting the `balance` value. The best option is to probably take the latest transaction's "balance" of the single date.
priority
update monobank balances history not by calculating it from income expense but by just reading balance property currently each monobank s account balance history is calculated based on transaction movements and it is calculated manually since the bank already providing us info about the account s balance after the tx we actually just need to update the balance history for that account by just setting the balance value the best option is to probably take the latest transaction s balance of the single date
1
16,579
12,058,036,757
IssuesEvent
2020-04-15 16:46:49
skypyproject/skypy
https://api.github.com/repos/skypyproject/skypy
closed
Twine warning: `long_description_content_type` missing
bug infrastructure v0.1 Hack
**Describe the bug** When packaging for release, twine warns that `long_description_content_type` is missing. **To Reproduce** 1. `python setup.py build sdist` 2. `twine check dist/*` ``` Checking dist/skypy-0.1rc1.tar.gz: PASSED, with warnings warning: `long_description_content_type` missing. defaulting to `text/x-rst`. ``` **Fix** Add `long_description_content_type = text/x-rst` to `setup.cfg`
1.0
Twine warning: `long_description_content_type` missing - **Describe the bug** When packaging for release, twine warns that `long_description_content_type` is missing. **To Reproduce** 1. `python setup.py build sdist` 2. `twine check dist/*` ``` Checking dist/skypy-0.1rc1.tar.gz: PASSED, with warnings warning: `long_description_content_type` missing. defaulting to `text/x-rst`. ``` **Fix** Add `long_description_content_type = text/x-rst` to `setup.cfg`
non_priority
twine warning long description content type missing describe the bug when packaging for release twine warns that long description content type is missing to reproduce python setup py build sdist twine check dist checking dist skypy tar gz passed with warnings warning long description content type missing defaulting to text x rst fix add long description content type text x rst to setup cfg
0
181,462
21,658,680,215
IssuesEvent
2022-05-06 16:39:02
doc-ai/snipe-it
https://api.github.com/repos/doc-ai/snipe-it
closed
CVE-2021-41183 (Medium) detected in jquery-ui-1.11.4.js, jquery-ui-1.11.4.min.js - autoclosed
security vulnerability
## CVE-2021-41183 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-ui-1.11.4.js</b>, <b>jquery-ui-1.11.4.min.js</b></p></summary> <p> <details><summary><b>jquery-ui-1.11.4.js</b></p></summary> <p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.js</a></p> <p>Path to vulnerable library: /public/js/plugins/jQueryUI/jquery-ui.js</p> <p> Dependency Hierarchy: - :x: **jquery-ui-1.11.4.js** (Vulnerable Library) </details> <details><summary><b>jquery-ui-1.11.4.min.js</b></p></summary> <p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js</a></p> <p>Path to vulnerable library: /public/js/plugins/jQueryUI/jquery-ui.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-ui-1.11.4.min.js** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of various `*Text` options of the Datepicker widget from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. The values passed to various `*Text` options are now always treated as pure text, not HTML. A workaround is to not accept the value of the `*Text` options from untrusted sources. 
<p>Publish Date: 2021-10-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41183>CVE-2021-41183</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183</a></p> <p>Release Date: 2021-10-26</p> <p>Fix Resolution: jquery-ui - 1.13.0</p> </p> </details> <p></p>
True
CVE-2021-41183 (Medium) detected in jquery-ui-1.11.4.js, jquery-ui-1.11.4.min.js - autoclosed - ## CVE-2021-41183 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-ui-1.11.4.js</b>, <b>jquery-ui-1.11.4.min.js</b></p></summary> <p> <details><summary><b>jquery-ui-1.11.4.js</b></p></summary> <p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.js</a></p> <p>Path to vulnerable library: /public/js/plugins/jQueryUI/jquery-ui.js</p> <p> Dependency Hierarchy: - :x: **jquery-ui-1.11.4.js** (Vulnerable Library) </details> <details><summary><b>jquery-ui-1.11.4.min.js</b></p></summary> <p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js</a></p> <p>Path to vulnerable library: /public/js/plugins/jQueryUI/jquery-ui.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-ui-1.11.4.min.js** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of various `*Text` options of the Datepicker widget from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. The values passed to various `*Text` options are now always treated as pure text, not HTML. 
A workaround is to not accept the value of the `*Text` options from untrusted sources. <p>Publish Date: 2021-10-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41183>CVE-2021-41183</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183</a></p> <p>Release Date: 2021-10-26</p> <p>Fix Resolution: jquery-ui - 1.13.0</p> </p> </details> <p></p>
non_priority
cve medium detected in jquery ui js jquery ui min js autoclosed cve medium severity vulnerability vulnerable libraries jquery ui js jquery ui min js jquery ui js a curated set of user interface interactions effects widgets and themes built on top of the jquery javascript library library home page a href path to vulnerable library public js plugins jqueryui jquery ui js dependency hierarchy x jquery ui js vulnerable library jquery ui min js a curated set of user interface interactions effects widgets and themes built on top of the jquery javascript library library home page a href path to vulnerable library public js plugins jqueryui jquery ui min js dependency hierarchy x jquery ui min js vulnerable library vulnerability details jquery ui is the official jquery user interface library prior to version accepting the value of various text options of the datepicker widget from untrusted sources may execute untrusted code the issue is fixed in jquery ui the values passed to various text options are now always treated as pure text not html a workaround is to not accept the value of the text options from untrusted sources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery ui
0
554,960
16,443,763,795
IssuesEvent
2021-05-20 17:02:33
microsoft/graspologic
https://api.github.com/repos/microsoft/graspologic
closed
Consider parallelizing sampling inside of the LatentDistributionTest
enhancement low priority
## Expected Behavior LDT that uses size correction is a little slower than uncorrected one. This is expected since it has an extra step, but it may be made faster. In particular, it may be that sampling from multivariate gaussians (which is probably done via rejection sampling or some kind of transformation and is likely not very fast) can be sped up by parallelizing here: https://github.com/microsoft/graspologic/blob/bbbc68a24ca9e2097575e7f92f809916ea3eeb46/graspologic/inference/latent_distribution_test.py#L346-L350 However, there are two caveats: 1. The results of each iteration of this loop are random, and parallelization+randomness should be treated with caution. If using Joblib, consider this resourse: https://bdpedigo.github.io/posts/2020/02/demo-parallel/ :) 2. Each separate iteration by itself might be quite fast, so it is not clear whether parallelizing, either via joblib, or multiprocessing will even speed up or slow down the overall test. Thus it is imperative to run an experiment ensuring that parallelizing yields a better performance than not parallelizing. It is also important to ensure that this holds both when running only a single LDT _and_ when running a simulation that requires repeated use of the LDT (because joblib has interesting performance differences when you call it only once or multiple times). Of course, this would use the same workers as the current kwarg to LDT. ## Actual Behavior Sampling is not parallelized. :(
1.0
Consider parallelizing sampling inside of the LatentDistributionTest - ## Expected Behavior LDT that uses size correction is a little slower than uncorrected one. This is expected since it has an extra step, but it may be made faster. In particular, it may be that sampling from multivariate gaussians (which is probably done via rejection sampling or some kind of transformation and is likely not very fast) can be sped up by parallelizing here: https://github.com/microsoft/graspologic/blob/bbbc68a24ca9e2097575e7f92f809916ea3eeb46/graspologic/inference/latent_distribution_test.py#L346-L350 However, there are two caveats: 1. The results of each iteration of this loop are random, and parallelization+randomness should be treated with caution. If using Joblib, consider this resourse: https://bdpedigo.github.io/posts/2020/02/demo-parallel/ :) 2. Each separate iteration by itself might be quite fast, so it is not clear whether parallelizing, either via joblib, or multiprocessing will even speed up or slow down the overall test. Thus it is imperative to run an experiment ensuring that parallelizing yields a better performance than not parallelizing. It is also important to ensure that this holds both when running only a single LDT _and_ when running a simulation that requires repeated use of the LDT (because joblib has interesting performance differences when you call it only once or multiple times). Of course, this would use the same workers as the current kwarg to LDT. ## Actual Behavior Sampling is not parallelized. :(
priority
consider parallelizing sampling inside of the latentdistributiontest expected behavior ldt that uses size correction is a little slower than uncorrected one this is expected since it has an extra step but it may be made faster in particular it may be that sampling from multivariate gaussians which is probably done via rejection sampling or some kind of transformation and is likely not very fast can be sped up by parallelizing here however there are two caveats the results of each iteration of this loop are random and parallelization randomness should be treated with caution if using joblib consider this resourse each separate iteration by itself might be quite fast so it is not clear whether parallelizing either via joblib or multiprocessing will even speed up or slow down the overall test thus it is imperative to run an experiment ensuring that parallelizing yields a better performance than not parallelizing it is also important to ensure that this holds both when running only a single ldt and when running a simulation that requires repeated use of the ldt because joblib has interesting performance differences when you call it only once or multiple times of course this would use the same workers as the current kwarg to ldt actual behavior sampling is not parallelized
1
153,407
5,891,578,135
IssuesEvent
2017-05-17 17:24:01
mentii/mentii
https://api.github.com/repos/mentii/mentii
closed
Teachers should not be able to join their own classes. (3)
bug Priority - Low Severity - Low
Story Points | 3 :--- | --- **Owner** | Alex #### Description Teachers should not be able to join their own classes. See title. #### Acceptance Criteria As a teacher teaching a class, you cannot enroll in the class. You should be able to view your class however.
1.0
Teachers should not be able to join their own classes. (3) - Story Points | 3 :--- | --- **Owner** | Alex #### Description Teachers should not be able to join their own classes. See title. #### Acceptance Criteria As a teacher teaching a class, you cannot enroll in the class. You should be able to view your class however.
priority
teachers should not be able to join their own classes story points owner alex description teachers should not be able to join their own classes see title acceptance criteria as a teacher teaching a class you cannot enroll in the class you should be able to view your class however
1
456,065
13,136,481,871
IssuesEvent
2020-08-07 06:13:30
grey-software/grey.software
https://api.github.com/repos/grey-software/grey.software
opened
Create a 'Why Donate' Page
Domain: User Experience Priority: High Type: Enhancement
We'll take the content from here: https://docs.google.com/document/d/1a9i6x5FKs_kFwD0VFWk-MdhQbDPulpLEJKiWYFTI5aI/edit?usp=sharing # Why should I donate? ## Overview In order to honor its mission, Grey Software is a not-for-profit organization. The organization was founded so that we orient ourselves towards creating a better world instead of richer shareholders. ## To Protect Your Freedoms "As our society grows more dependent on computers, the software we run is of critical importance to securing the future of a free society." With our lives increasingly governed by software written by corporations that monetize our data, it is more important than ever to protect our freedom to know what the software programs we run are doing. Grey Software creates open source software that anyone else can study and improve by looking into the source code. It holds us as an organization accountable for being ethical with the trust our users put in us. By donating, you help keep alive the dream of a future where humans use open source software that protects their freedoms. ## To Revolutionize Software Education The software education ecosystem we’re building encourages new, enthusiastic software developers to apprentice under maintainers on open source projects. This allows students to gain valuable experience while maintainers get enthusiastic developers and a source of income. We believe that a collaborative, education-driven environment such as this can advance software technology in a positive direction. By donating, you help fund the technical infrastructure needed to run our education programs, and provide the possibility of financial aid to students.
1.0
priority
1
48,911
3,000,833,308
IssuesEvent
2015-07-24 06:34:15
jayway/powermock
https://api.github.com/repos/jayway/powermock
closed
SAX2 parsing - com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader
bug imported invalid Priority-Medium
_From [AlexWib...@gmail.com](https://code.google.com/u/111621038469822880271/) on July 28, 2010 08:58:48_ Hi all, First of all, my problem is probably similar to the following issue: http://groups.google.com/group/powermock/browse_thread/thread/88079512f2dfcbd1/59b5d831e3e7f9b8?pli=1 basically, I have a hibernate configuration that gets loaded in my test. I'm getting the following error: \-------------------------------------------------- java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:199) at org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:150) at org.dom4j.io.SAXHelper.createXMLReader(SAXHelper.java:83) at org.dom4j.io.SAXReader.createXMLReader(SAXReader.java:894) at org.dom4j.io.SAXReader.getXMLReader(SAXReader.java:715) at org.dom4j.io.SAXReader.read(SAXReader.java:435) org.hibernate.HibernateException: Could not parse configuration: inMemory-hibernate.cfg.xml at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1528) at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035) at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64) at org.hibernate.cfg.Configuration.configure(Configuration.java:1462) at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017) ... .. Caused by: org.dom4j.DocumentException: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader Nested exception: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader at org.dom4j.io.SAXReader.read(SAXReader.java:484) at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518) ... 
34 more at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518) Warning: Caught exception attempting to use SAX to load a SAX XMLReader at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035) at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64) Warning: Exception was: java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader at org.hibernate.cfg.Configuration.configure(Configuration.java:1462) Warning: I will print the stack trace then carry on using the default SAX parser at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017) ... .. \-------------------------------------- I've tried the recommendation suggested in the issue I mentioned above, without any luck: \---------------------------------- @RunWith(PowerMockRunner.class) @PrepareForTest({MyServiceImpl.class}) @PowerMockIgnore( { "com.sun.org.apache.xerces.*", "org.dom4j.*", "org.xml.sax.*" }) public class MyTest extends InMemoryLookupTests{ private MyServiceImpl service; ... @Before public void setup() throws Exception { MockitoAnnotations.initMocks(this); service = new MyServiceImpl(); .... } ..... } \------------------------------------ Could someone please share some light? Thanks in advance! Regards, Alex. _Original issue: http://code.google.com/p/powermock/issues/detail?id=270_
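As background (not part of the original report): this `ClassCastException` is the classic symptom of `org.xml.sax.XMLReader` being defined by two different class loaders, which is why the commonly suggested remedy is widening `@PowerMockIgnore` to also defer `javax.xml.*`, `org.xml.*`, and `org.w3c.*` to the system class loader. The sketch below is a minimal, stdlib-only illustration of obtaining the reader through JAXP (`SAXParserFactory`) instead of `XMLReaderFactory`, so the implementation class and the `XMLReader` interface come from one loader; the class name `SaxSmokeTest` is made up for this example.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class SaxSmokeTest {
    /** Parses the given XML and returns the element names seen, in document order. */
    static String parseElements(String xml) throws Exception {
        // Resolve the parser through JAXP rather than XMLReaderFactory, so the
        // concrete SAXParser and the XMLReader interface share a class loader.
        XMLReader reader = SAXParserFactory.newInstance().newSAXParser().getXMLReader();
        StringBuilder seen = new StringBuilder();
        reader.setContentHandler(new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                seen.append(qName).append(';');
            }
        });
        reader.parse(new InputSource(new StringReader(xml)));
        return seen.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseElements("<a><b/></a>")); // a;b;
    }
}
```

Under a PowerMock test, the equivalent move is keeping these packages out of the instrumenting loader entirely via `@PowerMockIgnore`, so casts like the one in the stack trace above resolve against a single definition of `XMLReader`.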
1.0
priority
1
43,119
23,122,582,507
IssuesEvent
2022-07-27 23:49:27
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
Threads accumulating in Suspended state and issue with CLR Critical Section
tenet-performance
**Description** We have an intermittent issue that happens in **Production**, so there are no repros. The issue is that the **IIS Worker Process (W3WP.exe)** starts to consume a lot of memory, then recycles when it reaches the **10 GB** we use as a threshold. **ETW traces** and **dump file** analysis reveal a cascade effect where, at the bottom of the issue, many threads are suspended by the **GC** alongside a potential **CLR leaked Critical Section**, sometimes affecting about 10% of the threads. The cumulative effect is that more threads are created to work on incoming **HTTP Requests**, since the current threads aren’t making progress; memory utilization increases as a consequence of the new objects rooted to the new threads, which at some point puts pressure on the **GC** and affects other threads as well, since they are waiting for the **GC** to finish. It’s not clear yet what in our code triggers the threads to be in the suspended state, but the issue has similarities to: **Garbage Collection Thread is blocked waiting for another thread for 10 seconds or more**. 
#44698 [](https://github.com/dotnet/runtime/issues/44698) **Deadlock while trying to Suspend EE & acquire threadstore lock while starting** a GC #37571 [](https://github.com/dotnet/runtime/issues/37571) **Configuration** Windows 10 Version 17763 MP (64 procs) Free x64 Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz, 16 Cores, 32 Logical Processors Physical Memory: 96 GB Hyper-V enabled Microsoft .NET Runtime CLR version 4.8.4515.0 built by: NET48REL1LAST_C .NET Framework 8, latest version GC: Server mode with 64 gc heaps **Analysis** This screenshot is from a **15 GB** dump file: ![image](https://user-images.githubusercontent.com/106347412/181380781-46cdf43e-6b11-48dd-87df-ebe67e553c33.png) **Thread 276**, owning the **lock**: # Call Site 00 ntdll!NtGetContextThread 01 KERNELBASE!GetThreadContext 02 **clr!Thread::SuspendThread** 03 **clr!Thread::SysStartSuspendForDebug** 04 clr!Debugger::TrapAllRuntimeThreads 05 clr!Debugger::SendCatchHandlerFound 06 **clr!Debugger::FirstChanceManagedExceptionCatcherFound** 07 clr!ExceptionTracker::ProcessManagedCallFrame 08 clr!ExceptionTracker::ProcessOSExceptionNotification 09 clr!ProcessCLRException 0a ntdll!RtlpExecuteHandlerForException 0b ntdll!RtlDispatchException 0c ntdll!KiUserExceptionDispatch 0d KERNELBASE!RaiseException 0e clr!RaiseTheExceptionInternalOnly 0f clr!IL_Throw 10 **System!System.Net.Sockets.NetworkStream.Read** 11 clr!ExceptionTracker::CallHandler 12 clr!ExceptionTracker::CallCatchHandler 13 clr!ProcessCLRException 14 ntdll!RtlpExecuteHandlerForUnwind 15 ntdll!RtlUnwindEx 16 clr!ClrUnwindEx 17 clr!ProcessCLRException 18 ntdll!RtlpExecuteHandlerForException 19 ntdll!RtlDispatchException 1a ntdll!KiUserExceptionDispatch 1b KERNELBASE!RaiseException 1c clr!RaiseTheExceptionInternalOnly 1d clr!IL_Throw 1e **System!System.Net.Sockets.NetworkStream.Read** 1f Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.RawReadBytes 20 
Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.RawReadByte 21 Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.ManagedReadBuffer 22 Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.ReadString 23 DistributedComputation_Client!XXX.DistributedComputation.Client.DCompNodeTcpClient.DoComputationsV1 24 DistributedComputation_Client!XXX.DistributedComputation.Client.ComputationClient.<>c__DisplayClass37_2.<DoComputations>b__2 25 Platform_Framework_Common_2c3813a0000!Framework.Common.NodeClientConnectionPoolHandler.DoClientWork 26 DistributedComputation_Client!XXX.DistributedComputation.Client.ComputationClient.<>c__DisplayClass37_0.<DoComputations>b__1 27 Platform_Framework_Common_2c3813a0000!Framework.Common.FanOutClientOperator<Framework.Common.Machine>.<>c__DisplayClass4_1.<DoOpParallelToQuorum>b__0 28 Platform_Framework_Common_2c3813a0000!XXX.Platform.TaskFactoryExtensions.<>c__DisplayClass0_0.<StartNewWithTraceToken>b__0 29 mscorlib!System.Threading.Tasks.Task.Execute 2a mscorlib!System.Threading.ExecutionContext.RunInternal 2b mscorlib!System.Threading.ExecutionContext.Run 2c mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 2d mscorlib!System.Threading.Tasks.Task.ExecuteEntry 2e Platform_Common_2c381210000!Platform.Common.StaticThreadPoolQueuedTaskScheduler.ThreadBasedDispatchLoop 2f mscorlib!System.Threading.ExecutionContext.RunInternal 30 mscorlib!System.Threading.ExecutionContext.Run 31 mscorlib!System.Threading.ExecutionContext.Run 32 mscorlib!System.Threading.ThreadHelper.ThreadStart 33 clr!CallDescrWorkerInternal 34 clr!CallDescrWorkerWithHandler 35 clr!MethodDescCallSite::CallTargetWorker 36 clr!ThreadNative::KickOffThread_Worker 37 clr!ManagedThreadBase_DispatchInner 38 clr!ManagedThreadBase_DispatchMiddle 39 clr!ManagedThreadBase_DispatchOuter 3a clr!ManagedThreadBase_DispatchInCorrectAD 3b clr!Thread::DoADCallBack 3c clr!ManagedThreadBase_DispatchInner 3d 
clr!ManagedThreadBase_DispatchMiddle 3e clr!ManagedThreadBase_DispatchOuter 3f clr!ManagedThreadBase_FullTransitionWithAD 40 clr!ThreadNative::KickOffThread 41 clr!Thread::intermediateThreadProc 42 kernel32!BaseThreadInitThunk 43 ntdll!RtlUserThreadStart **Exceptions** from the thread above: ![image](https://user-images.githubusercontent.com/106347412/181383566-0fe8ff9e-d271-4882-b9cf-2aed9ec89a88.png) Example from another dump file. Thread trying to acquire the **CS lock**: OS Thread Id: 0xcdd0 (1372) Child SP IP Call Site 000000d970e7eab8 00007ffd53e68ce0 System.Threading._ThreadPoolWaitCallback.PerformWaitCallback() 000000d970e7eec0 00007ffd56556893 [DebuggerU2MCatchHandlerFrame: 000000d970e7eec0] 000000d970e7f028 00007ffd56556893 [ContextTransitionFrame: 000000d970e7f028] 000000d970e7f260 00007ffd56556893 [DebuggerU2MCatchHandlerFrame: 000000d970e7f260] **Native side:** # Call Site 00 ntdll!NtWaitForAlertByThreadId 01 ntdll!RtlpWaitOnAddressWithTimeout 02 ntdll!RtlpWaitOnAddress 03 ntdll!RtlpWaitOnCriticalSection 04 ntdll!RtlpEnterCriticalSectionContended 05 **ntdll!RtlEnterCriticalSection** 06 **clr!CrstBase::SpinEnter** 07 **clr!CrstBase::Enter** 08 clr!Debugger::DoNotCallDirectlyPrivateLock 09 clr!Debugger::LockForEventSending 0a clr!DebuggerController::DispatchPatchOrSingleStep 0b clr!DebuggerController::DispatchNativeException 0c **clr!Debugger::FirstChanceNativeException** 0d clr!IsDebuggerFault 0e clr!CLRVectoredExceptionHandlerPhase2 0f **clr!CLRVectoredExceptionHandler** 10 clr!CLRVectoredExceptionHandlerShim 11 ntdll!RtlpCallVectoredHandlers 12 **ntdll!RtlDispatchException** 13 ntdll!KiUserExceptionDispatch 14 **mscorlib!System.Threading._ThreadPoolWaitCallback.PerformWaitCallback** 15 clr!CallDescrWorkerInternal 16 clr!CallDescrWorkerWithHandler 17 clr!MethodDescCallSite::CallTargetWorker 18 clr!QueueUserWorkItemManagedCallback 19 clr!ManagedThreadBase_DispatchInner 1a clr!ManagedThreadBase_DispatchMiddle 1b clr!ManagedThreadBase_DispatchOuter 1c 
clr!ManagedThreadBase_DispatchInCorrectAD 1d clr!Thread::DoADCallBack 1e clr!ManagedThreadBase_DispatchInner 1f clr!ManagedThreadBase_DispatchMiddle 20 clr!ManagedThreadBase_DispatchOuter 21 clr!ManagedThreadBase_FullTransitionWithAD 22 clr!ManagedPerAppDomainTPCount::DispatchWorkItem 23 clr!ThreadpoolMgr::ExecuteWorkRequest 24 clr!ThreadpoolMgr::WorkerThreadStart 25 clr!Thread::intermediateThreadProc 26 kernel32!BaseThreadInitThunk 27 ntdll!RtlUserThreadStart **Critical Section:** CritSec +e29bc4f8 at 00000255e29bc4f8 WaiterWoken No **LockCount 1** RecursionCount 1 **OwningThread 5a78** EntryCount 0 ContentionCount 1ff98 *** Locked **Owner:** # Call Site 00 ntdll!NtWaitForMultipleObjects 01 KERNELBASE!WaitForMultipleObjectsEx 02 clr!DebuggerRCThread::MainLoop 03 clr!DebuggerRCThread::ThreadProc 04 clr!DebuggerRCThread::ThreadProcStatic 05 kernel32!BaseThreadInitThunk 06 ntdll!RtlUserThreadStart A more granular call stack reveals what seems to be an **exception** when **Leave()** was called: OS Thread Id: 0x5a78 (76) Current frame: ntdll!NtWaitForMultipleObjects+0x14 Child-SP RetAddr Caller, Callee 000000d9483ff930 00007ffd609cd43e KERNELBASE!WaitForMultipleObjectsEx+0xfe, calling ntdll!NtWaitForMultipleObjects 000000d9483ff950 00007ffd63bbd5b7 ntdll!RtlpFreeUserBlockToHeap+0x2b, calling ntdll!RtlFreeHeap 000000d9483ff960 00007ffd63bbb7e2 ntdll!RtlpFreeUserBlock+0x186, calling ntdll!RtlGetCurrentServiceSessionId 000000d9483ffa00 00007ffd609a932e **KERNELBASE!RaiseException**+0x7e, calling KERNELBASE!_security_check_cookie 000000d9483ffa38 00007ffd609a9319 **KERNELBASE!RaiseException**+0x69, calling ntdll!RtlRaiseException 000000d9483ffae0 00007ffd566d2221 **clr!Debugger::SendRawEvent**+0x59, calling clr!_security_check_cookie 000000d9483ffb90 00007ffd56c8d7dc **clr!DebuggerRCThread::SendIPCEvent**+0xad, calling clr!Debugger::SendRawEvent 000000d9483ffba0 00007ffd56c7626d clr!Debugger::InitIPCEvent+0x3d 000000d9483ffbd0 00007ffd56c7d918 
clr!Debugger::SendSyncCompleteIPCEvent+0x104, calling clr!DebuggerRCThread::SendIPCEvent 000000d9483ffc00 00007ffd56c7fbe3 clr!Debugger::SuspendComplete+0x43, calling clr!Debugger::SendSyncCompleteIPCEvent 000000d9483ffc30 00007ffd566e01af **clr!DebuggerRCThread::MainLoop**+0xc9, calling kernel32!WaitForMultipleObjectsEx 000000d9483ffcc0 00007ffd56555160 **clr!CrstBase::Leave**+0x87 000000d9483ffcf0 00007ffd566e00cb clr!DebuggerRCThread::ThreadProc+0xda, calling clr!DebuggerRCThread::MainLoop 000000d9483ffd40 00007ffd566dffc1 clr!DebuggerRCThread::ThreadProcStatic+0x41, calling clr!DebuggerRCThread::ThreadProc 000000d9483ffd90 00007ffd61ed7974 kernel32!BaseThreadInitThunk+0x14, calling ntdll!LdrpDispatchUserCallTarget 000000d9483ffdc0 00007ffd63bfa2f1 ntdll!RtlUserThreadStart+0x21, calling ntdll!LdrpDispatchUserCallTarget One among many threads suspended by the **GC**: ![image](https://user-images.githubusercontent.com/106347412/181380878-e77a4026-f42f-4fbe-a5db-6d636a34d2cb.png) **Native side:** ![image](https://user-images.githubusercontent.com/106347412/181380954-9a7b0c1b-2d23-439c-8ac7-45464ecdfda8.png) Most, if not all, threads throwing **1st-chance exceptions** are in the same state. But other threads not throwing **exceptions** are also in the same state. **Thread state:** CLR Owns However, most threads have this state: **GC Suspend Pending** **GC On Transitions** Legal to Join Yield Requested **Blocking GC for Stack Overflow** <<< no idea what this means and couldn’t find documentation Background CLR Owns My theory is that the **Critical Section** issue and the threads in **suspended state** issue may be connected where the first causes the second issue. 
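To make the theory above concrete (this is a conceptual model, not the runtime's actual code): when one thread blocks while still holding a mutual-exclusion lock, every other thread that needs that lock queues behind it, which is consistent with the **ContentionCount 1ff98** seen on the debugger critical section. A hypothetical Java sketch of that convoy shape, using `ReentrantLock` in place of the CLR critical section and made-up names (`LockConvoyDemo`, `debuggerLock`):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class LockConvoyDemo {
    // Models the pattern from the dump: one owner holds the lock while it is
    // itself blocked on an event; all other threads pile up behind the lock.
    static int queuedWaiters(int waiterCount) throws InterruptedException {
        ReentrantLock debuggerLock = new ReentrantLock();
        CountDownLatch ownerHasLock = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        Thread owner = new Thread(() -> {
            debuggerLock.lock();           // like the thread owning the CS
            ownerHasLock.countDown();
            try {
                release.await();           // blocked on an event while holding the lock
            } catch (InterruptedException ignored) {
            } finally {
                debuggerLock.unlock();
            }
        });
        owner.start();
        ownerHasLock.await();

        Thread[] waiters = new Thread[waiterCount];
        for (int i = 0; i < waiterCount; i++) {
            waiters[i] = new Thread(() -> {
                debuggerLock.lock();       // like threads stuck in CrstBase::Enter
                debuggerLock.unlock();
            });
            waiters[i].start();
        }
        // Wait until every waiter is parked behind the lock, then sample the queue.
        while (debuggerLock.getQueueLength() < waiterCount) Thread.sleep(10);
        int queued = debuggerLock.getQueueLength();
        release.countDown();               // fire the event so the owner can finish
        for (Thread w : waiters) w.join();
        owner.join();
        return queued;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(queuedWaiters(8)); // 8
    }
}
```

The point of the model: none of the queued threads makes progress until the owner's event fires, so queue depth (and, in the real process, thread count and rooted memory) grows with incoming work rather than with any fault in the waiters themselves.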
There is a **DLL** sampling data, which is the primary suspect based on issue #44698, but I couldn’t find evidence of this **DLL** triggering the issue like it was found in https://github.com/dotnet/runtime/issues/44698 Anyway, here are all **call stacks** where this **DLL** is running, you may spot something interesting: 146 Id: 1190.1e6c Suspend: 0 Teb: 000000d9`45afe000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 clr!Thread::DoAppropriateWaitWorker 0a clr!Thread::DoAppropriateWait 0b **clr!WaitHandleNative::CorWaitOneNative** 0c mscorlib!System.Threading.WaitHandle.InternalWaitOne 0d mscorlib!System.Threading.WaitHandle.WaitOne 0e mscorlib!System.Threading.WaitHandle.WaitOne 0f **Monitoring_System_269f0150000!XXX.Monitoring.CounterManager.SamplingThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run . . . . . . 
147 Id: 1190.38c0 Suspend: 0 Teb: 000000d9`45b00000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run 18 mscorlib!System.Threading.ThreadHelper.ThreadStart 19 clr!CallDescrWorkerInternal 1a clr!CallDescrWorkerWithHandler 1b clr!MethodDescCallSite::CallTargetWorker . . . . . . 
148 Id: 1190.c178 Suspend: 0 Teb: 000000d9`45b02000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b **clr!CLREventBase::WaitEx** 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run 18 mscorlib!System.Threading.ThreadHelper.ThreadStart 19 clr!CallDescrWorkerInternal 1a clr!CallDescrWorkerWithHandler 1b clr!MethodDescCallSite::CallTargetWorker . . . . . . 
148 Id: 1190.c178 Suspend: 0 Teb: 000000d9`45b02000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal . . . . . . 149 Id: 1190.8bb0 Suspend: 0 Teb: 000000d9`45b04000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run . . . . . . 
150 Id: 1190.2a08 Suspend: 0 Teb: 000000d9`45b06000 Unfrozen # Call Site 00 ntdll!NtWaitForMultipleObjects 01 KERNELBASE!WaitForMultipleObjectsEx 02 clr!WaitForMultipleObjectsEx_SO_TOLERANT 03 clr!Thread::DoAppropriateWaitWorker 04 clr!Thread::DoAppropriateWait 05 **clr!CLREventBase::WaitEx** 06 **clr!Thread::Block** 07 **clr!SyncBlock::Wait** 08 clr!ObjectNative::WaitTimeout 09 mscorlib!System.Threading.SemaphoreSlim.WaitUntilCountOrTimeout 0a mscorlib!System.Threading.SemaphoreSlim.Wait 0b System! 0c System! 0d System! 0e **Monitoring_System_269f0150000!XXX.Monitoring.EventChannelDispatcherV2.QueueThread** 0f mscorlib!System.Threading.Tasks.Task.Execute 10 mscorlib!System.Threading.ExecutionContext.RunInternal 11 mscorlib!System.Threading.ExecutionContext.Run 12 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 13 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 14 mscorlib!System.Threading.ExecutionContext.RunInternal 15 mscorlib!System.Threading.ExecutionContext.Run 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ThreadHelper.ThreadStart 18 clr!CallDescrWorkerInternal 19 clr!CallDescrWorkerWithHandler 1a clr!MethodDescCallSite::CallTargetWorker 1b clr!ThreadNative::KickOffThread_Worker 1c clr!ManagedThreadBase_DispatchInner 1d clr!ManagedThreadBase_DispatchMiddle 1e clr!ManagedThreadBase_DispatchOuter 1f clr!ManagedThreadBase_DispatchInCorrectAD 20 clr!Thread::DoADCallBack 21 clr!ManagedThreadBase_DispatchInner 22 clr!ManagedThreadBase_DispatchMiddle 23 clr!ManagedThreadBase_DispatchOuter 24 clr!ManagedThreadBase_FullTransitionWithAD 25 clr!ThreadNative::KickOffThread 26 clr!Thread::intermediateThreadProc 27 kernel32!BaseThreadInitThunk 28 ntdll!RtlUserThreadStart All threads not making progress show this pattern: ![image](https://user-images.githubusercontent.com/106347412/181386887-ae4215a6-63e1-45fe-b4a0-ce828708108d.png) 
![image](https://user-images.githubusercontent.com/106347412/181386955-3113880b-674b-44d2-8c38-cd2d56a6d50c.png) ![image](https://user-images.githubusercontent.com/106347412/181387243-7bfd6c51-b9d4-4667-9d40-1261d5007a1f.png) ![image](https://user-images.githubusercontent.com/106347412/181387313-27c21427-0d9d-4bc5-92a7-2c9324df907c.png) From a total of **1247 threads**, there are **~510 threads** in the state above and **673 threads** in this state: ![image](https://user-images.githubusercontent.com/106347412/181391141-b44d0bbe-a998-445e-aef9-aaf54fffcc41.png) **GC Mode** is **Preemptive** for all threads: ![image](https://user-images.githubusercontent.com/106347412/181388022-5fb59541-5546-4259-84a1-8120ecae16f8.png) If needed, I have much more information from the analysis. Also, please let me know: - What kind of information from the **dump files** and **ETW traces** will help to identify why the threads are in this suspended state, and I’ll extract it from the data - Is there a **GC setting** that could mitigate this issue? - Is there an easy and quick way to get the **Private PDB** files for the **CLR.DLL**? I'm getting the **Public** one, so can't see arguments/local variables ![image](https://user-images.githubusercontent.com/106347412/181387405-82932cdc-96fa-4fad-97c9-e84b1d12f3fc.png) Thanks!
True
Threads accumulating in Suspended state and issue with CLR Critical Section - **Description** We have an intermittent issue that happens in **Production**, so no repros. The issue is that the **IIS Worker Process (W3WP.exe)** starts to consume a lot of memory, then recycles when it reaches the **10 GB** we use as a threshold. Analysis of the **ETW traces** and **dump files** reveals a cascade effect where, at the bottom of the issue, are the many threads suspended by the **GC** and a potentially leaked **CLR Critical Section**, sometimes affecting about 10% of the threads. The cumulative effect is that more threads are created to work on incoming **HTTP Requests**, since the current threads aren’t making progress; memory utilization increases as a consequence of the new objects rooted to the new threads and, at some point, puts pressure on the **GC**, affecting other threads as well, which are waiting for the **GC** to finish. It’s not clear yet what in our code is triggering the threads to be in suspended state, but the issue has similarities to: **Garbage Collection Thread is blocked waiting for another thread for 10 seconds or more**. 
#44698 [](https://github.com/dotnet/runtime/issues/44698) **Deadlock while trying to Suspend EE & acquire threadstore lock while starting** a GC #37571 [](https://github.com/dotnet/runtime/issues/37571) **Configuration** Windows 10 Version 17763 MP (64 procs) Free x64 Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz, 16 Cores, 32 Logical Processors Physical Memory: 96 GB Hyper-V enabled Microsoft .NET Runtime CLR version 4.8.4515.0 built by: NET48REL1LAST_C .NET Framework 8, latest version GC: Server mode with 64 gc heaps **Analysis** This screenshot is from a **15 GB** dump file: ![image](https://user-images.githubusercontent.com/106347412/181380781-46cdf43e-6b11-48dd-87df-ebe67e553c33.png) **Thread 276**, owning the **lock**: # Call Site 00 ntdll!NtGetContextThread 01 KERNELBASE!GetThreadContext 02 **clr!Thread::SuspendThread** 03 **clr!Thread::SysStartSuspendForDebug** 04 clr!Debugger::TrapAllRuntimeThreads 05 clr!Debugger::SendCatchHandlerFound 06 **clr!Debugger::FirstChanceManagedExceptionCatcherFound** 07 clr!ExceptionTracker::ProcessManagedCallFrame 08 clr!ExceptionTracker::ProcessOSExceptionNotification 09 clr!ProcessCLRException 0a ntdll!RtlpExecuteHandlerForException 0b ntdll!RtlDispatchException 0c ntdll!KiUserExceptionDispatch 0d KERNELBASE!RaiseException 0e clr!RaiseTheExceptionInternalOnly 0f clr!IL_Throw 10 **System!System.Net.Sockets.NetworkStream.Read** 11 clr!ExceptionTracker::CallHandler 12 clr!ExceptionTracker::CallCatchHandler 13 clr!ProcessCLRException 14 ntdll!RtlpExecuteHandlerForUnwind 15 ntdll!RtlUnwindEx 16 clr!ClrUnwindEx 17 clr!ProcessCLRException 18 ntdll!RtlpExecuteHandlerForException 19 ntdll!RtlDispatchException 1a ntdll!KiUserExceptionDispatch 1b KERNELBASE!RaiseException 1c clr!RaiseTheExceptionInternalOnly 1d clr!IL_Throw 1e **System!System.Net.Sockets.NetworkStream.Read** 1f Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.RawReadBytes 20 
Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.RawReadByte 21 Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.ManagedReadBuffer 22 Platform_Framework_Common_2c3813a0000!Framework.Common.IntegrityStream.ReadString 23 DistributedComputation_Client!XXX.DistributedComputation.Client.DCompNodeTcpClient.DoComputationsV1 24 DistributedComputation_Client!XXX.DistributedComputation.Client.ComputationClient.<>c__DisplayClass37_2.<DoComputations>b__2 25 Platform_Framework_Common_2c3813a0000!Framework.Common.NodeClientConnectionPoolHandler.DoClientWork 26 DistributedComputation_Client!XXX.DistributedComputation.Client.ComputationClient.<>c__DisplayClass37_0.<DoComputations>b__1 27 Platform_Framework_Common_2c3813a0000!Framework.Common.FanOutClientOperator<Framework.Common.Machine>.<>c__DisplayClass4_1.<DoOpParallelToQuorum>b__0 28 Platform_Framework_Common_2c3813a0000!XXX.Platform.TaskFactoryExtensions.<>c__DisplayClass0_0.<StartNewWithTraceToken>b__0 29 mscorlib!System.Threading.Tasks.Task.Execute 2a mscorlib!System.Threading.ExecutionContext.RunInternal 2b mscorlib!System.Threading.ExecutionContext.Run 2c mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 2d mscorlib!System.Threading.Tasks.Task.ExecuteEntry 2e Platform_Common_2c381210000!Platform.Common.StaticThreadPoolQueuedTaskScheduler.ThreadBasedDispatchLoop 2f mscorlib!System.Threading.ExecutionContext.RunInternal 30 mscorlib!System.Threading.ExecutionContext.Run 31 mscorlib!System.Threading.ExecutionContext.Run 32 mscorlib!System.Threading.ThreadHelper.ThreadStart 33 clr!CallDescrWorkerInternal 34 clr!CallDescrWorkerWithHandler 35 clr!MethodDescCallSite::CallTargetWorker 36 clr!ThreadNative::KickOffThread_Worker 37 clr!ManagedThreadBase_DispatchInner 38 clr!ManagedThreadBase_DispatchMiddle 39 clr!ManagedThreadBase_DispatchOuter 3a clr!ManagedThreadBase_DispatchInCorrectAD 3b clr!Thread::DoADCallBack 3c clr!ManagedThreadBase_DispatchInner 3d 
clr!ManagedThreadBase_DispatchMiddle 3e clr!ManagedThreadBase_DispatchOuter 3f clr!ManagedThreadBase_FullTransitionWithAD 40 clr!ThreadNative::KickOffThread 41 clr!Thread::intermediateThreadProc 42 kernel32!BaseThreadInitThunk 43 ntdll!RtlUserThreadStart **Exceptions** from the thread above: ![image](https://user-images.githubusercontent.com/106347412/181383566-0fe8ff9e-d271-4882-b9cf-2aed9ec89a88.png) Example from another dump file. Thread trying to acquire the **CS lock**: OS Thread Id: 0xcdd0 (1372) Child SP IP Call Site 000000d970e7eab8 00007ffd53e68ce0 System.Threading._ThreadPoolWaitCallback.PerformWaitCallback() 000000d970e7eec0 00007ffd56556893 [DebuggerU2MCatchHandlerFrame: 000000d970e7eec0] 000000d970e7f028 00007ffd56556893 [ContextTransitionFrame: 000000d970e7f028] 000000d970e7f260 00007ffd56556893 [DebuggerU2MCatchHandlerFrame: 000000d970e7f260] **Native side:** # Call Site 00 ntdll!NtWaitForAlertByThreadId 01 ntdll!RtlpWaitOnAddressWithTimeout 02 ntdll!RtlpWaitOnAddress 03 ntdll!RtlpWaitOnCriticalSection 04 ntdll!RtlpEnterCriticalSectionContended 05 **ntdll!RtlEnterCriticalSection** 06 **clr!CrstBase::SpinEnter** 07 **clr!CrstBase::Enter** 08 clr!Debugger::DoNotCallDirectlyPrivateLock 09 clr!Debugger::LockForEventSending 0a clr!DebuggerController::DispatchPatchOrSingleStep 0b clr!DebuggerController::DispatchNativeException 0c **clr!Debugger::FirstChanceNativeException** 0d clr!IsDebuggerFault 0e clr!CLRVectoredExceptionHandlerPhase2 0f **clr!CLRVectoredExceptionHandler** 10 clr!CLRVectoredExceptionHandlerShim 11 ntdll!RtlpCallVectoredHandlers 12 **ntdll!RtlDispatchException** 13 ntdll!KiUserExceptionDispatch 14 **mscorlib!System.Threading._ThreadPoolWaitCallback.PerformWaitCallback** 15 clr!CallDescrWorkerInternal 16 clr!CallDescrWorkerWithHandler 17 clr!MethodDescCallSite::CallTargetWorker 18 clr!QueueUserWorkItemManagedCallback 19 clr!ManagedThreadBase_DispatchInner 1a clr!ManagedThreadBase_DispatchMiddle 1b clr!ManagedThreadBase_DispatchOuter 1c 
clr!ManagedThreadBase_DispatchInCorrectAD 1d clr!Thread::DoADCallBack 1e clr!ManagedThreadBase_DispatchInner 1f clr!ManagedThreadBase_DispatchMiddle 20 clr!ManagedThreadBase_DispatchOuter 21 clr!ManagedThreadBase_FullTransitionWithAD 22 clr!ManagedPerAppDomainTPCount::DispatchWorkItem 23 clr!ThreadpoolMgr::ExecuteWorkRequest 24 clr!ThreadpoolMgr::WorkerThreadStart 25 clr!Thread::intermediateThreadProc 26 kernel32!BaseThreadInitThunk 27 ntdll!RtlUserThreadStart **Critical Section:** CritSec +e29bc4f8 at 00000255e29bc4f8 WaiterWoken No **LockCount 1** RecursionCount 1 **OwningThread 5a78** EntryCount 0 ContentionCount 1ff98 *** Locked **Owner:** # Call Site 00 ntdll!NtWaitForMultipleObjects 01 KERNELBASE!WaitForMultipleObjectsEx 02 clr!DebuggerRCThread::MainLoop 03 clr!DebuggerRCThread::ThreadProc 04 clr!DebuggerRCThread::ThreadProcStatic 05 kernel32!BaseThreadInitThunk 06 ntdll!RtlUserThreadStart A more granular call stack reveals what seems to be an **exception** when **Leave()** was called: OS Thread Id: 0x5a78 (76) Current frame: ntdll!NtWaitForMultipleObjects+0x14 Child-SP RetAddr Caller, Callee 000000d9483ff930 00007ffd609cd43e KERNELBASE!WaitForMultipleObjectsEx+0xfe, calling ntdll!NtWaitForMultipleObjects 000000d9483ff950 00007ffd63bbd5b7 ntdll!RtlpFreeUserBlockToHeap+0x2b, calling ntdll!RtlFreeHeap 000000d9483ff960 00007ffd63bbb7e2 ntdll!RtlpFreeUserBlock+0x186, calling ntdll!RtlGetCurrentServiceSessionId 000000d9483ffa00 00007ffd609a932e **KERNELBASE!RaiseException**+0x7e, calling KERNELBASE!_security_check_cookie 000000d9483ffa38 00007ffd609a9319 **KERNELBASE!RaiseException**+0x69, calling ntdll!RtlRaiseException 000000d9483ffae0 00007ffd566d2221 **clr!Debugger::SendRawEvent**+0x59, calling clr!_security_check_cookie 000000d9483ffb90 00007ffd56c8d7dc **clr!DebuggerRCThread::SendIPCEvent**+0xad, calling clr!Debugger::SendRawEvent 000000d9483ffba0 00007ffd56c7626d clr!Debugger::InitIPCEvent+0x3d 000000d9483ffbd0 00007ffd56c7d918 
clr!Debugger::SendSyncCompleteIPCEvent+0x104, calling clr!DebuggerRCThread::SendIPCEvent 000000d9483ffc00 00007ffd56c7fbe3 clr!Debugger::SuspendComplete+0x43, calling clr!Debugger::SendSyncCompleteIPCEvent 000000d9483ffc30 00007ffd566e01af **clr!DebuggerRCThread::MainLoop**+0xc9, calling kernel32!WaitForMultipleObjectsEx 000000d9483ffcc0 00007ffd56555160 **clr!CrstBase::Leave**+0x87 000000d9483ffcf0 00007ffd566e00cb clr!DebuggerRCThread::ThreadProc+0xda, calling clr!DebuggerRCThread::MainLoop 000000d9483ffd40 00007ffd566dffc1 clr!DebuggerRCThread::ThreadProcStatic+0x41, calling clr!DebuggerRCThread::ThreadProc 000000d9483ffd90 00007ffd61ed7974 kernel32!BaseThreadInitThunk+0x14, calling ntdll!LdrpDispatchUserCallTarget 000000d9483ffdc0 00007ffd63bfa2f1 ntdll!RtlUserThreadStart+0x21, calling ntdll!LdrpDispatchUserCallTarget One among many threads suspended by the **GC**: ![image](https://user-images.githubusercontent.com/106347412/181380878-e77a4026-f42f-4fbe-a5db-6d636a34d2cb.png) **Native side:** ![image](https://user-images.githubusercontent.com/106347412/181380954-9a7b0c1b-2d23-439c-8ac7-45464ecdfda8.png) Most, if not all, threads throwing **1st-chance exceptions** are in the same state. But other threads not throwing **exceptions** are also in the same state. **Thread state:** CLR Owns However, most threads have this state: **GC Suspend Pending** **GC On Transitions** Legal to Join Yield Requested **Blocking GC for Stack Overflow** <<< no idea what this means and couldn’t find documentation Background CLR Owns My theory is that the **Critical Section** issue and the threads in **suspended state** issue may be connected where the first causes the second issue. 
There is a **DLL** sampling data, which is the primary suspect based on issue #44698, but I couldn’t find evidence of this **DLL** triggering the issue like it was found in https://github.com/dotnet/runtime/issues/44698 Anyway, here are all **call stacks** where this **DLL** is running, you may spot something interesting: 146 Id: 1190.1e6c Suspend: 0 Teb: 000000d9`45afe000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 clr!Thread::DoAppropriateWaitWorker 0a clr!Thread::DoAppropriateWait 0b **clr!WaitHandleNative::CorWaitOneNative** 0c mscorlib!System.Threading.WaitHandle.InternalWaitOne 0d mscorlib!System.Threading.WaitHandle.WaitOne 0e mscorlib!System.Threading.WaitHandle.WaitOne 0f **Monitoring_System_269f0150000!XXX.Monitoring.CounterManager.SamplingThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run . . . . . . 
147 Id: 1190.38c0 Suspend: 0 Teb: 000000d9`45b00000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run 18 mscorlib!System.Threading.ThreadHelper.ThreadStart 19 clr!CallDescrWorkerInternal 1a clr!CallDescrWorkerWithHandler 1b clr!MethodDescCallSite::CallTargetWorker . . . . . . 
148 Id: 1190.c178 Suspend: 0 Teb: 000000d9`45b02000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b **clr!CLREventBase::WaitEx** 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ExecutionContext.Run 18 mscorlib!System.Threading.ThreadHelper.ThreadStart 19 clr!CallDescrWorkerInternal 1a clr!CallDescrWorkerWithHandler 1b clr!MethodDescCallSite::CallTargetWorker . . . . . . 
148 Id: 1190.c178 Suspend: 0 Teb: 000000d9`45b02000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal . . . . . . 149 Id: 1190.8bb0 Suspend: 0 Teb: 000000d9`45b04000 Unfrozen # Call Site 00 ntdll!NtWaitForSingleObject 01 KERNELBASE!WaitForSingleObjectEx 02 clr!CLREventWaitHelper2 03 clr!CLREventWaitHelper 04 clr!CLREventBase::WaitEx 05 clr!Thread::WaitSuspendEventsHelper 06 **clr!Thread::WaitSuspendEvents** 07 **clr!Thread::RareEnablePreemptiveGC** 08 **clr!Thread::RareDisablePreemptiveGC** 09 **clr!PendingSync::Restore** 0a clr!Thread::DoAppropriateWait 0b clr!CLREventBase::WaitEx 0c **clr!Thread::Block** 0d **clr!SyncBlock::Wait** 0e clr!ObjectNative::WaitTimeout 0f **Monitoring_System_269f0150000!XXX.Monitoring.EventSinkHandler.QueueThread** 10 mscorlib!System.Threading.Tasks.Task.Execute 11 mscorlib!System.Threading.ExecutionContext.RunInternal 12 mscorlib!System.Threading.ExecutionContext.Run 13 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 14 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 15 mscorlib!System.Threading.ExecutionContext.RunInternal 16 mscorlib!System.Threading.ExecutionContext.Run . . . . . . 
150 Id: 1190.2a08 Suspend: 0 Teb: 000000d9`45b06000 Unfrozen # Call Site 00 ntdll!NtWaitForMultipleObjects 01 KERNELBASE!WaitForMultipleObjectsEx 02 clr!WaitForMultipleObjectsEx_SO_TOLERANT 03 clr!Thread::DoAppropriateWaitWorker 04 clr!Thread::DoAppropriateWait 05 **clr!CLREventBase::WaitEx** 06 **clr!Thread::Block** 07 **clr!SyncBlock::Wait** 08 clr!ObjectNative::WaitTimeout 09 mscorlib!System.Threading.SemaphoreSlim.WaitUntilCountOrTimeout 0a mscorlib!System.Threading.SemaphoreSlim.Wait 0b System! 0c System! 0d System! 0e **Monitoring_System_269f0150000!XXX.Monitoring.EventChannelDispatcherV2.QueueThread** 0f mscorlib!System.Threading.Tasks.Task.Execute 10 mscorlib!System.Threading.ExecutionContext.RunInternal 11 mscorlib!System.Threading.ExecutionContext.Run 12 mscorlib!System.Threading.Tasks.Task.ExecuteWithThreadLocal 13 mscorlib!System.Threading.Tasks.Task.ExecuteEntry 14 mscorlib!System.Threading.ExecutionContext.RunInternal 15 mscorlib!System.Threading.ExecutionContext.Run 16 mscorlib!System.Threading.ExecutionContext.Run 17 mscorlib!System.Threading.ThreadHelper.ThreadStart 18 clr!CallDescrWorkerInternal 19 clr!CallDescrWorkerWithHandler 1a clr!MethodDescCallSite::CallTargetWorker 1b clr!ThreadNative::KickOffThread_Worker 1c clr!ManagedThreadBase_DispatchInner 1d clr!ManagedThreadBase_DispatchMiddle 1e clr!ManagedThreadBase_DispatchOuter 1f clr!ManagedThreadBase_DispatchInCorrectAD 20 clr!Thread::DoADCallBack 21 clr!ManagedThreadBase_DispatchInner 22 clr!ManagedThreadBase_DispatchMiddle 23 clr!ManagedThreadBase_DispatchOuter 24 clr!ManagedThreadBase_FullTransitionWithAD 25 clr!ThreadNative::KickOffThread 26 clr!Thread::intermediateThreadProc 27 kernel32!BaseThreadInitThunk 28 ntdll!RtlUserThreadStart All threads not making progress show this pattern: ![image](https://user-images.githubusercontent.com/106347412/181386887-ae4215a6-63e1-45fe-b4a0-ce828708108d.png) 
![image](https://user-images.githubusercontent.com/106347412/181386955-3113880b-674b-44d2-8c38-cd2d56a6d50c.png) ![image](https://user-images.githubusercontent.com/106347412/181387243-7bfd6c51-b9d4-4667-9d40-1261d5007a1f.png) ![image](https://user-images.githubusercontent.com/106347412/181387313-27c21427-0d9d-4bc5-92a7-2c9324df907c.png) From a total of **1247 threads**, there are **~510 threads** in the state above and **673 threads** in this state: ![image](https://user-images.githubusercontent.com/106347412/181391141-b44d0bbe-a998-445e-aef9-aaf54fffcc41.png) **GC Mode** is **Preemptive** for all threads: ![image](https://user-images.githubusercontent.com/106347412/181388022-5fb59541-5546-4259-84a1-8120ecae16f8.png) If needed, I have much more information from the analysis. Also, please let me know: - What kind of information from the **dump files** and **ETW traces** will help to identify why the threads are in this suspended state, and I’ll extract them from the data - Is there a **GC setting** that could mitigate this issue? - Is there an easy and quick way to get the **Private PDB** files for the **CLR.DLL**? I'm getting the **Public** one, so can't see arguments/local variables ![image](https://user-images.githubusercontent.com/106347412/181387405-82932cdc-96fa-4fad-97c9-e84b1d12f3fc.png) Thanks!
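On the **GC setting** question in the list above: with Server GC creating one heap per logical processor (64 here), a commonly tried mitigation on .NET Framework 4.6.2+ is capping the heap count via the `<runtime>` section of web.config/app.config. This is only a hedged sketch, not a confirmed fix for the suspension issue itself; the value 16 is an arbitrary example chosen to illustrate the elements:

```xml
<configuration>
  <runtime>
    <!-- Server GC, which the process already uses per the report -->
    <gcServer enabled="true"/>
    <!-- Example only: cap the number of GC heaps below the 64 logical CPUs -->
    <GCHeapCount enabled="16"/>
    <!-- Optionally stop affinitizing GC threads to specific processors -->
    <GCNoAffinitize enabled="true"/>
  </runtime>
</configuration>
```

Fewer heaps mean fewer GC threads to rendezvous during a suspension, which can shorten suspend/resume phases on large boxes, at the cost of more allocation contention per heap.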
non_priority
0
541,182
15,822,878,095
IssuesEvent
2021-04-05 23:18:17
simonbaird/tiddlyhost
https://api.github.com/repos/simonbaird/tiddlyhost
closed
Remove encrypted credentials from the repo and git ignore it
priority
The readme says "remove the credentials file" but then you have a permanent git diff that you can't commit. Considerations: * I'd like a failsafe so that I don't accidentally build and deploy without the credentials file, or without changes to the credentials file. Currently it's easy since that file is checked in. * What's a better way to manage this credentials file? * There are some other secrets outside the rails credential file that are not checked in. What's a good way to manage them? Can I do the same thing with them as well as the rails credentials file. (I expect there's some sensible best practices.)
1.0
Remove encrypted credentials from the repo and git ignore it - The readme says "remove the credentials file" but then you have a permanent git diff that you can't commit. Considerations: * I'd like a failsafe so that I don't accidentally build and deploy without the credentials file, or without changes to the credentials file. Currently it's easy since that file is checked in. * What's a better way to manage this credentials file? * There are some other secrets outside the rails credential file that are not checked in. What's a good way to manage them? Can I do the same thing with them as well as the rails credentials file. (I expect there's some sensible best practices.)
priority
remove encrypted credentials from the repo and git ignore it the readme says remove the credentials file but then you have a permanent git diff that you can t commit considerations i d like a failsafe so that i don t accidentally build and deploy without the credentials file or without changes to the credentials file currently it s easy since that file is checked in what s a better way to manage this credentials file there are some other secrets outside the rails credential file that are not checked in what s a good way to manage them can i do the same thing with them as well as the rails credentials file i expect there s some sensible best practices
1
412,351
27,854,914,393
IssuesEvent
2023-03-20 21:53:23
danielbrendel/dnyAsatruPHP-Framework
https://api.github.com/repos/danielbrendel/dnyAsatruPHP-Framework
opened
add 'npm run watch' to npm documentation
documentation
Add 'npm run watch' documentation into npm/webpack section.
1.0
add 'npm run watch' to npm documentation - Add 'npm run watch' documentation into npm/webpack section.
non_priority
add npm run watch to npm documentation add npm run watch documentation into npm webpack section
0
37,542
18,508,219,370
IssuesEvent
2021-10-19 21:29:40
mapbox/mapbox-gl-js
https://api.github.com/repos/mapbox/mapbox-gl-js
opened
optimize projections with more precise tile cover
bug :lady_beetle: performance :zap:
Non-mercator projections are slower to load than mercator at higher zoom levels. `winkelTripel` is roughly ~17% slower in our benchmarks. It looks like most of this is caused by [imprecise bounds checking](https://github.com/mapbox/mapbox-gl-js/blob/107a5256bebb4d06c6fb073618632d865e48e5b9/src/geo/transform.js#L747-L760), which leads to loading tiles that are entirely offscreen. This affects both loading and rendering performance. Mercator maps are not affected by this. ![Screen Shot 2021-10-19 at 4 26 44 PM](https://user-images.githubusercontent.com/1421652/137985305-a0226557-48bb-48c3-b3b9-6e351bd60513.png) cc @mourner
True
optimize projections with more precise tile cover - Non-mercator projections are slower to load than mercator at higher zoom levels. `winkelTripel` is roughly ~17% slower in our benchmarks. It looks like most of this is caused by [imprecise bounds checking](https://github.com/mapbox/mapbox-gl-js/blob/107a5256bebb4d06c6fb073618632d865e48e5b9/src/geo/transform.js#L747-L760), which leads to loading tiles that are entirely offscreen. This affects both loading and rendering performance. Mercator maps are not affected by this. ![Screen Shot 2021-10-19 at 4 26 44 PM](https://user-images.githubusercontent.com/1421652/137985305-a0226557-48bb-48c3-b3b9-6e351bd60513.png) cc @mourner
non_priority
optimize projections with more precise tile cover non mercator projections are slower to load than mercator at higher zoom levels winkeltripel is roughly slower in our benchmarks it looks like most of this is caused by which leads to loading tiles that are entirely offscreen this affects both loading and rendering performance mercator maps are not affected by this cc mourner
0
775,767
27,236,539,682
IssuesEvent
2023-02-21 16:43:25
testomatio/app
https://api.github.com/repos/testomatio/app
closed
Tests name needs word wrapping
bug ui\ux priority low
**To Reproduce** Steps to reproduce the behavior: 1. open a project 2. create a test with a very long name 3. resize the browser window 4. see the issue **Expected behavior** The name of the test is wrapped by words, o a user can see tests statuses **Screenshots** ![screencast 2021-08-20 10-00-25](https://user-images.githubusercontent.com/77803888/130194128-c6fd0ebc-f65c-4af7-9366-90c6efa8f71a.gif) **Desktop (please complete the following information):** - OS: MacOS - Browser chrome - Application: production and beta
1.0
Tests name needs word wrapping - **To Reproduce** Steps to reproduce the behavior: 1. open a project 2. create a test with a very long name 3. resize the browser window 4. see the issue **Expected behavior** The name of the test is wrapped by words, o a user can see tests statuses **Screenshots** ![screencast 2021-08-20 10-00-25](https://user-images.githubusercontent.com/77803888/130194128-c6fd0ebc-f65c-4af7-9366-90c6efa8f71a.gif) **Desktop (please complete the following information):** - OS: MacOS - Browser chrome - Application: production and beta
priority
tests name needs word wrapping to reproduce steps to reproduce the behavior open a project create a test with a very long name resize the browser window see the issue expected behavior the name of the test is wrapped by words o a user can see tests statuses screenshots desktop please complete the following information os macos browser chrome application production and beta
1
420,055
12,232,498,585
IssuesEvent
2020-05-04 09:48:15
containous/maesh
https://api.github.com/repos/containous/maesh
closed
Inject errors in topology
area/api area/logs kind/enhancement kind/task priority/P1
Replaces #477 In the end, we'd rather inject topology builder errors in the built topology, just like we do it for Traefik's dynamic configuration already. This makes the process of pointing out exactly where an issue lies much easier than with our previous implementation, while also being less invasive by leaving the current logging system in its initial state. The final goal remains to later on add an entrypoint to the API to list the configuration errors with precise context.
1.0
Inject errors in topology - Replaces #477 In the end, we'd rather inject topology builder errors in the built topology, just like we do it for Traefik's dynamic configuration already. This makes the process of pointing out exactly where an issue lies much easier than with our previous implementation, while also being less invasive by leaving the current logging system in its initial state. The final goal remains to later on add an entrypoint to the API to list the configuration errors with precise context.
priority
inject errors in topology replaces in the end we d rather inject topology builder errors in the built topology just like we do it for traefik s dynamic configuration already this makes the process of pointing out exactly where an issue lies much easier than with our previous implementation while also being less invasive by leaving the current logging system in its initial state the final goal remains to later on add an entrypoint to the api to list the configuration errors with precise context
1
99,874
4,074,063,123
IssuesEvent
2016-05-28 06:01:44
GLolol/PyLink
https://api.github.com/repos/GLolol/PyLink
opened
SWHOIS support
priority:wishlist protocol spec
- [ ] Track SWHOIS of clients on various IRCds. - [ ] Allow service bots to specify custom WHOIS lines via SWHOIS, as desired.
1.0
SWHOIS support - - [ ] Track SWHOIS of clients on various IRCds. - [ ] Allow service bots to specify custom WHOIS lines via SWHOIS, as desired.
priority
swhois support track swhois of clients on various ircds allow service bots to specify custom whois lines via swhois as desired
1
505,573
14,641,279,041
IssuesEvent
2020-12-25 06:18:21
PyTorchLightning/pytorch-lightning
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
closed
slurm auto re-queue inconsistency
Checkpoint Priority P1 SLURM bug / fix help wanted won't fix
Hi! I submitted a slurm job-array with pytorch lightning functionality. I used the suggested signal (#SBATCH --signal=SIGUSR1@90) and set distributed_backend to 'ddp' in the Trainer call. I did notice successful auto-resubmission this morning whenever my jobs were pre-emptied; however, I now notice that several of them have not completed and are not queued either. Wondering if this has been reported by someone earlier and any clue why this could happen? Is there a maximum to the number of times the jobs would be re-queued or other slurm rules that may prevent requeuing, etc.? My jobs might have been pre-emptied several times as I was running them on the low priority "non-capped" queue so as to occupy maximum number of gpus whenever they become available (already using my quota of high/medium priority queues). Thanks in advance!
1.0
slurm auto re-queue inconsistency - Hi! I submitted a slurm job-array with pytorch lightning functionality. I used the suggested signal (#SBATCH --signal=SIGUSR1@90) and set distributed_backend to 'ddp' in the Trainer call. I did notice successful auto-resubmission this morning whenever my jobs were pre-emptied; however, I now notice that several of them have not completed and are not queued either. Wondering if this has been reported by someone earlier and any clue why this could happen? Is there a maximum to the number of times the jobs would be re-queued or other slurm rules that may prevent requeuing, etc.? My jobs might have been pre-emptied several times as I was running them on the low priority "non-capped" queue so as to occupy maximum number of gpus whenever they become available (already using my quota of high/medium priority queues). Thanks in advance!
priority
slurm auto re queue inconsistency hi i submitted a slurm job array with pytorch lightning functionality i used the suggested signal sbatch signal and set distributed backend to ddp in the trainer call i did notice successful auto resubmission this morning whenever my jobs were pre emptied however i now notice that several of them have not completed and are not queued either wondering if this has been reported by someone earlier and any clue why this could happen is there a maximum to the number of times the jobs would be re queued or other slurm rules that may prevent requeuing etc my jobs might have been pre emptied several times as i was running them on the low priority non capped queue so as to occupy maximum number of gpus whenever they become available already using my quota of high medium priority queues thanks in advance
1
720,104
24,779,394,833
IssuesEvent
2022-10-24 02:26:42
mito-ds/monorepo
https://api.github.com/repos/mito-ds/monorepo
closed
No way to sign up for Mito Pro after signed in
type: mitosheet waiting on: mockups effort: 5 priority: medium
Currently, there is no way to sign up for Mito Pro after you finish the signing process, without using the installer. We should make it easy to sign up for Mito Pro. Probably the easiest solution is a simple modal with an input that allows you to signup for Pro; that's it!
1.0
No way to sign up for Mito Pro after signed in - Currently, there is no way to sign up for Mito Pro after you finish the signing process, without using the installer. We should make it easy to sign up for Mito Pro. Probably the easiest solution is a simple modal with an input that allows you to signup for Pro; that's it!
priority
no way to sign up for mito pro after signed in currently there is no way to sign up for mito pro after you finish the signing process without using the installer we should make it easy to sign up for mito pro probably the easiest solution is a simple modal with an input that allows you to signup for pro that s it
1
381,897
11,297,491,709
IssuesEvent
2020-01-17 06:12:59
glints-dev/glints-aries
https://api.github.com/repos/glints-dev/glints-aries
opened
Collapsible: Fix UI Mismatches
:nail_care: UI Mismatch Low Priority
Glints UI: https://app.zeplin.io/project/5ba343bf0c399c33db483653/screen/5bee63340a4c5f3e7a816d15 Glints Aries Status: https://docs.google.com/spreadsheets/d/1O9WY_x9MmzRIbskPE-2_TsKgOz3N7C61iEqcx6zWln8/ Identify the UI Mismatches and then fix them.
1.0
Collapsible: Fix UI Mismatches - Glints UI: https://app.zeplin.io/project/5ba343bf0c399c33db483653/screen/5bee63340a4c5f3e7a816d15 Glints Aries Status: https://docs.google.com/spreadsheets/d/1O9WY_x9MmzRIbskPE-2_TsKgOz3N7C61iEqcx6zWln8/ Identify the UI Mismatches and then fix them.
priority
collapsible fix ui mismatches glints ui glints aries status identify the ui mismatches and then fix them
1
19,170
5,814,941,387
IssuesEvent
2017-05-05 06:42:59
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
Massive loading times after update to 3.7?
No Code Attached Yet
Hey there, i updated two pages now to joomla 3.7 and notices that both of them got a high loading time up to 8 seconds to get the page loaded. Does anyone else have that problem?
1.0
Massive loading times after update to 3.7? - Hey there, i updated two pages now to joomla 3.7 and notices that both of them got a high loading time up to 8 seconds to get the page loaded. Does anyone else have that problem?
non_priority
massive loading times after update to hey there i updated two pages now to joomla and notices that both of them got a high loading time up to seconds to get the page loaded does anyone else have that problem
0
32,201
4,756,659,988
IssuesEvent
2016-10-24 14:36:02
Microsoft/vscode
https://api.github.com/repos/Microsoft/vscode
opened
Test extension packs
extensions testplan-item
Test for #14277 - [ ] Windows - [ ] macOS - [ ] Linux Complexity - 4 ### Set up - Make sure you can publish an extension. Ref - http://code.visualstudio.com/docs/tools/vscecli - Use the [sample test extensions](https://github.com/Microsoft/vscode-extension-samples/tree/master/extension-deps-sample) - This repo has 3 extension packs (1, 2, 3) which are already published - Each has some dependencies - Look at it's package.json for dependencies - Use these extensions for testing following scenarios (You can also come up with your own if you want to :) ) - Also do play around with changing dependencies and bumping the versions to test different scenarios for installing and updating. **Note:** Publishing the extension takes a while to get reflected in the market place. ### Dependencies view - Search for one of the above extensions and open the editor - Verify that dependencies tab is show and navigate to it - Verify that dependencies tree is shown here - Navigate down the tree to see dependencies of dependencies and verify that they are shown correctly - If a dependency is a dependency of itself (direct/recursive) then tree stops there. - If there is an invalid dependency entry, an error entry is shown - Verify that you can open (also open to side) the dependency by clicking on the name ### UnInstalling packs - Search for one of the above extensions and install - Verify that a confirmation is asked if to uninstall the extension only or with dependencies - Selecting `only` option should uninstall only itself - Selecting `All` option should uninstall itself and all its dependencies recursively. - Verify that Uninstall should not uninstall the dependency of the extension if other installed Search for one of the above extensions and open the editor/s depends on this dependency. - Verify that uninstalling any extension should be prevented if other installed extension/s depends on this.
1.0
Test extension packs - Test for #14277 - [ ] Windows - [ ] macOS - [ ] Linux Complexity - 4 ### Set up - Make sure you can publish an extension. Ref - http://code.visualstudio.com/docs/tools/vscecli - Use the [sample test extensions](https://github.com/Microsoft/vscode-extension-samples/tree/master/extension-deps-sample) - This repo has 3 extension packs (1, 2, 3) which are already published - Each has some dependencies - Look at it's package.json for dependencies - Use these extensions for testing following scenarios (You can also come up with your own if you want to :) ) - Also do play around with changing dependencies and bumping the versions to test different scenarios for installing and updating. **Note:** Publishing the extension takes a while to get reflected in the market place. ### Dependencies view - Search for one of the above extensions and open the editor - Verify that dependencies tab is show and navigate to it - Verify that dependencies tree is shown here - Navigate down the tree to see dependencies of dependencies and verify that they are shown correctly - If a dependency is a dependency of itself (direct/recursive) then tree stops there. - If there is an invalid dependency entry, an error entry is shown - Verify that you can open (also open to side) the dependency by clicking on the name ### UnInstalling packs - Search for one of the above extensions and install - Verify that a confirmation is asked if to uninstall the extension only or with dependencies - Selecting `only` option should uninstall only itself - Selecting `All` option should uninstall itself and all its dependencies recursively. - Verify that Uninstall should not uninstall the dependency of the extension if other installed Search for one of the above extensions and open the editor/s depends on this dependency. - Verify that uninstalling any extension should be prevented if other installed extension/s depends on this.
non_priority
test extension packs test for windows macos linux complexity set up make sure you can publish an extension ref use the this repo has extension packs which are already published each has some dependencies look at it s package json for dependencies use these extensions for testing following scenarios you can also come up with your own if you want to also do play around with changing dependencies and bumping the versions to test different scenarios for installing and updating note publishing the extension takes a while to get reflected in the market place dependencies view search for one of the above extensions and open the editor verify that dependencies tab is show and navigate to it verify that dependencies tree is shown here navigate down the tree to see dependencies of dependencies and verify that they are shown correctly if a dependency is a dependency of itself direct recursive then tree stops there if there is an invalid dependency entry an error entry is shown verify that you can open also open to side the dependency by clicking on the name uninstalling packs search for one of the above extensions and install verify that a confirmation is asked if to uninstall the extension only or with dependencies selecting only option should uninstall only itself selecting all option should uninstall itself and all its dependencies recursively verify that uninstall should not uninstall the dependency of the extension if other installed search for one of the above extensions and open the editor s depends on this dependency verify that uninstalling any extension should be prevented if other installed extension s depends on this
0
155,416
5,955,114,684
IssuesEvent
2017-05-28 01:32:26
Athissa/Tracker
https://api.github.com/repos/Athissa/Tracker
opened
Servant of the Queen
Class-Shaman Priority-Critical Type-Spell
**Describe the issue you're having**: Servant of the Queen procs on everything, but reincarnation. **Explain how you expect it work**: Die, Use Reincarnation, Get buff. **Steps to reproduce the problem**: 1. Swap to resto 2. Cast any healing spell on anyone 3. Gain the buff **Links to Wowhead, YouTube, etc**: http://www.wowhead.com/spell=207357/servant-of-the-queen
1.0
Servant of the Queen - **Describe the issue you're having**: Servant of the Queen procs on everything, but reincarnation. **Explain how you expect it work**: Die, Use Reincarnation, Get buff. **Steps to reproduce the problem**: 1. Swap to resto 2. Cast any healing spell on anyone 3. Gain the buff **Links to Wowhead, YouTube, etc**: http://www.wowhead.com/spell=207357/servant-of-the-queen
priority
servant of the queen describe the issue you re having servant of the queen procs on everything but reincarnation explain how you expect it work die use reincarnation get buff steps to reproduce the problem swap to resto cast any healing spell on anyone gain the buff links to wowhead youtube etc
1
278,691
8,649,089,456
IssuesEvent
2018-11-26 18:21:38
mozilla/addons-frontend
https://api.github.com/repos/mozilla/addons-frontend
closed
Star breakdown is not correctly updated on the add-on review page when rating stars are edited
component: add-on ratings priority: mvp priority: p3 type: bug
STR: 1. Log in to AMO 2. Review an add-on 3. Go to the add-on Reviews page 4. Make a note of the star breakdown 5. Edit your review, stars included, and click Submit review 6. Observe the star breakdown again Actual result: The star breakdown doesn't reflect the new star selection Expected result: The number corresponding to the new rating is increased while the number of the previous rating decreases Notes: - this happens all the time, not only when the add-on Reviews page was visited before - the updated numbers are visible after the page is reloaded - reproduced on all AMO servers with FF61, Win10x64 ![edit stars](https://user-images.githubusercontent.com/31961530/44327942-dc0a0b80-a468-11e8-8a37-acf51f234e06.gif)
2.0
Star breakdown is not correctly updated on the add-on review page when rating stars are edited - STR: 1. Log in to AMO 2. Review an add-on 3. Go to the add-on Reviews page 4. Make a note of the star breakdown 5. Edit your review, stars included, and click Submit review 6. Observe the star breakdown again Actual result: The star breakdown doesn't reflect the new star selection Expected result: The number corresponding to the new rating is increased while the number of the previous rating decreases Notes: - this happens all the time, not only when the add-on Reviews page was visited before - the updated numbers are visible after the page is reloaded - reproduced on all AMO servers with FF61, Win10x64 ![edit stars](https://user-images.githubusercontent.com/31961530/44327942-dc0a0b80-a468-11e8-8a37-acf51f234e06.gif)
priority
star breakdown is not correctly updated on the add on review page when rating stars are edited str log in to amo review an add on go to the add on reviews page make a note of the star breakdown edit your review stars included and click submit review observe the star breakdown again actual result the star breakdown doesn t reflect the new star selection expected result the number corresponding to the new rating is increased while the number of the previous rating decreases notes this happens all the time not only when the add on reviews page was visited before the updated numbers are visible after the page is reloaded reproduced on all amo servers with
1
643,413
20,956,857,009
IssuesEvent
2022-03-27 07:56:12
AY2122S2-CS2103-F09-2/tp
https://api.github.com/repos/AY2122S2-CS2103-F09-2/tp
opened
Modify findfriend
type.Story priority.High
Modify findfriend to split by argument, OR-based, substring for tags + substring for description
1.0
Modify findfriend - Modify findfriend to split by argument, OR-based, substring for tags + substring for description
priority
modify findfriend modify findfriend to split by argument or based substring for tags substring for description
1
331,745
10,076,586,839
IssuesEvent
2019-07-24 16:35:25
svof/svof
https://api.github.com/repos/svof/svof
closed
Error during Serverside Priority Sync
bug confirmed in-client medium priority
In order to reliable create this error: Turnoff Serverside curing and force a classchange. ![image](https://user-images.githubusercontent.com/14912622/61603911-85272c00-ac0d-11e9-83a3-71f288485fec.png) I believe the error is caused when sk.notifypriodiffs() runs into these when comparing differences. ![image](https://user-images.githubusercontent.com/14912622/61603760-d125a100-ac0c-11e9-929a-0d60e3c2c113.png)
1.0
Error during Serverside Priority Sync - In order to reliable create this error: Turnoff Serverside curing and force a classchange. ![image](https://user-images.githubusercontent.com/14912622/61603911-85272c00-ac0d-11e9-83a3-71f288485fec.png) I believe the error is caused when sk.notifypriodiffs() runs into these when comparing differences. ![image](https://user-images.githubusercontent.com/14912622/61603760-d125a100-ac0c-11e9-929a-0d60e3c2c113.png)
priority
error during serverside priority sync in order to reliable create this error turnoff serverside curing and force a classchange i believe the error is caused when sk notifypriodiffs runs into these when comparing differences
1
435,426
30,499,997,844
IssuesEvent
2023-07-18 13:23:57
GSTT-CSC/hazen
https://api.github.com/repos/GSTT-CSC/hazen
closed
Improve/fix usage instructions - separate init.py and main.py
bug documentation
![Screenshot 2023-06-30 at 10 57 15](https://github.com/GSTT-CSC/hazen/assets/15593138/5f2bf057-86f9-4b61-91e1-e81e1fe2754f) there are typos in the current instructions and could be arranged in a more logical order
1.0
Improve/fix usage instructions - separate init.py and main.py - ![Screenshot 2023-06-30 at 10 57 15](https://github.com/GSTT-CSC/hazen/assets/15593138/5f2bf057-86f9-4b61-91e1-e81e1fe2754f) there are typos in the current instructions and could be arranged in a more logical order
non_priority
improve fix usage instructions separate init py and main py there are typos in the current instructions and could be arranged in a more logical order
0
378,850
11,209,806,852
IssuesEvent
2020-01-06 11:27:36
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Minimap icons are 1 frame older than minimap
Fixed Medium Priority
When you drag quicly the minimap you can see that icons loose their positions in the minimap ![ezgif-2-908ba9238d27](https://user-images.githubusercontent.com/53317567/70945931-f7713080-2034-11ea-9d0f-dee37769bc0d.gif)
1.0
Minimap icons are 1 frame older than minimap - When you drag quicly the minimap you can see that icons loose their positions in the minimap ![ezgif-2-908ba9238d27](https://user-images.githubusercontent.com/53317567/70945931-f7713080-2034-11ea-9d0f-dee37769bc0d.gif)
priority
minimap icons are frame older than minimap when you drag quicly the minimap you can see that icons loose their positions in the minimap
1
139,336
11,258,015,087
IssuesEvent
2020-01-13 02:30:46
hfaerber/african-love-this-app
https://api.github.com/repos/hfaerber/african-love-this-app
closed
UX/UI notes
User Testing/Feedback
-Search feature searches all countries regardless of which display filter is selected. Doesn't change the display feature. User testing confirmed that user preference if executing a search would be to see results from all countries. Changed search label from `Search` to `Search All` to give clarity based on user testing recommendation
1.0
UX/UI notes - -Search feature searches all countries regardless of which display filter is selected. Doesn't change the display feature. User testing confirmed that user preference if executing a search would be to see results from all countries. Changed search label from `Search` to `Search All` to give clarity based on user testing recommendation
non_priority
ux ui notes search feature searches all countries regardless of which display filter is selected doesn t change the display feature user testing confirmed that user preference if executing a search would be to see results from all countries changed search label from search to search all to give clarity based on user testing recommendation
0
702,790
24,136,167,564
IssuesEvent
2022-09-21 11:29:30
unep-grid/map-x-mgl
https://api.github.com/repos/unep-grid/map-x-mgl
opened
Edit base map UN labels and terms of use to increase compliance with UN Cartographic Section
priority 1 improvement
- [ ] Replace the tileset used for country labels with new one (country names contain * in all languages where needed) in the mapx style.json - [ ] Edit terms of use to explain the * Files will be provided separately
1.0
Edit base map UN labels and terms of use to increase compliance with UN Cartographic Section - - [ ] Replace the tileset used for country labels with new one (country names contain * in all languages where needed) in the mapx style.json - [ ] Edit terms of use to explain the * Files will be provided separately
priority
edit base map un labels and terms of use to increase compliance with un cartographic section replace the tileset used for country labels with new one country names contain in all languages where needed in the mapx style json edit terms of use to explain the files will be provided separately
1