| Column | Dtype | Stats |
| --- | --- | --- |
| `Unnamed: 0` | int64 | 0 to 832k |
| `id` | float64 | 2.49B to 32.1B |
| `type` | stringclasses | 1 value |
| `created_at` | stringlengths | 19 to 19 |
| `repo` | stringlengths | 7 to 112 |
| `repo_url` | stringlengths | 36 to 141 |
| `action` | stringclasses | 3 values |
| `title` | stringlengths | 1 to 744 |
| `labels` | stringlengths | 4 to 574 |
| `body` | stringlengths | 9 to 211k |
| `index` | stringclasses | 10 values |
| `text_combine` | stringlengths | 96 to 211k |
| `label` | stringclasses | 2 values |
| `text` | stringlengths | 96 to 188k |
| `binary_label` | int64 | 0 to 1 |
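The schema above describes a flat per-issue table whose `label`/`binary_label` pair encodes a process vs. non-process classification. A minimal sketch of working with records of this shape in pandas; the two rows below are illustrative stand-ins shaped like the schema, not rows from the real dataset:

```python
import pandas as pd

# Two toy rows with a subset of the schema's columns; values are illustrative.
records = pd.DataFrame(
    {
        "type": ["IssuesEvent", "IssuesEvent"],
        "repo": ["elastic/beats", "melink14/rikaikun"],
        "action": ["closed", "closed"],
        "label": ["non_process", "process"],
        "binary_label": [0, 1],
    }
)

# binary_label appears to be the integer encoding of label:
# 1 exactly when label == "process".
assert (records["binary_label"] == (records["label"] == "process").astype(int)).all()

process_repos = records.loc[records["binary_label"] == 1, "repo"].tolist()
print(process_repos)  # ['melink14/rikaikun']
```

The same filter works unchanged on the full table once it is loaded, since it only touches the `label` and `binary_label` columns.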
Unnamed: 0: 13,617
id: 10,348,322,676
type: IssuesEvent
created_at: 2019-09-04 19:31:21
repo: elastic/beats
repo_url: https://api.github.com/repos/elastic/beats
action: closed
title: Prometheus collector in metricbeat silently fails if metric name is incorrect
labels: :infrastructure Metricbeat [zube]: In Review bug good first issue
body:
Prometheus collector silently fails if the prometheus response contains invalid metric names, this can be reproduced for example if the metric name contain dots. [This error](https://github.com/elastic/beats/blob/v7.3.0/metricbeat/helper/prometheus/prometheus.go#L82) should be reported, or at least logged on the debug level. For confirmed bugs, please report: - Version: 6.X, 7.X - Steps to Reproduce: Configure the prometheus collector to an endpoint that exposes prometheus metrics with incorrect names. Seen in java applications with Micrometer exposing metrics with dots in their names.
index: 1.0
text_combine:
Prometheus collector in metricbeat silently fails if metric name is incorrect - Prometheus collector silently fails if the prometheus response contains invalid metric names, this can be reproduced for example if the metric name contain dots. [This error](https://github.com/elastic/beats/blob/v7.3.0/metricbeat/helper/prometheus/prometheus.go#L82) should be reported, or at least logged on the debug level. For confirmed bugs, please report: - Version: 6.X, 7.X - Steps to Reproduce: Configure the prometheus collector to an endpoint that exposes prometheus metrics with incorrect names. Seen in java applications with Micrometer exposing metrics with dots in their names.
label: non_process
text:
prometheus collector in metricbeat silently fails if metric name is incorrect prometheus collector silently fails if the prometheus response contains invalid metric names this can be reproduced for example if the metric name contain dots should be reported or at least logged on the debug level for confirmed bugs please report version x x steps to reproduce configure the prometheus collector to an endpoint that exposes prometheus metrics with incorrect names seen in java applications with micrometer exposing metrics with dots in their names
binary_label: 0
Unnamed: 0: 2,158
id: 7,370,916,894
type: IssuesEvent
created_at: 2018-03-13 10:03:07
repo: poanetwork/poa-explorer
repo_url: https://api.github.com/repos/poanetwork/poa-explorer
action: opened
title: Build new workers. Blocks, Transactions, Internal Transactions, Events
labels: architecture
body:
Fetching Blocks = Transactions Fetching Transactions = Internal Transactions & Events Internal Transactions = Correct Balances Events = Token Data
index: 1.0
text_combine:
Build new workers. Blocks, Transactions, Internal Transactions, Events - Fetching Blocks = Transactions Fetching Transactions = Internal Transactions & Events Internal Transactions = Correct Balances Events = Token Data
label: non_process
text:
build new workers blocks transactions internal transactions events fetching blocks transactions fetching transactions internal transactions events internal transactions correct balances events token data
binary_label: 0
Unnamed: 0: 13,585
id: 16,136,290,482
type: IssuesEvent
created_at: 2021-04-29 12:17:43
repo: melink14/rikaikun
repo_url: https://api.github.com/repos/melink14/rikaikun
action: closed
title: Automate Release Process
labels: P2 process
body:
# Current Thinking - Use https://github.com/semantic-release/semantic-release [combined with Github Actions](https://github.com/semantic-release/semantic-release/blob/master/docs/recipes/github-actions.md) with plugins: - npm update but don't publish - generate release notes - edit changelog with release notes - Publish to chrome web store (https://github.com/GabrielDuarteM/semantic-release-chrome) - git plugin to commit package.json, manifest.json, and CHANGELOG.md - github plugin to push changes and update issues and PRs with release. One question is whether we can use a PR to commit the changes and then after that's merged we can trigger the publish and final tagging... By convention this process auto releases whenver a commit is published but I think we'll probably use a manual trigger to avoid spamming.
index: 1.0
text_combine:
Automate Release Process - # Current Thinking - Use https://github.com/semantic-release/semantic-release [combined with Github Actions](https://github.com/semantic-release/semantic-release/blob/master/docs/recipes/github-actions.md) with plugins: - npm update but don't publish - generate release notes - edit changelog with release notes - Publish to chrome web store (https://github.com/GabrielDuarteM/semantic-release-chrome) - git plugin to commit package.json, manifest.json, and CHANGELOG.md - github plugin to push changes and update issues and PRs with release. One question is whether we can use a PR to commit the changes and then after that's merged we can trigger the publish and final tagging... By convention this process auto releases whenver a commit is published but I think we'll probably use a manual trigger to avoid spamming.
label: process
text:
automate release process current thinking use with plugins npm update but don t publish generate release notes edit changelog with release notes publish to chrome web store git plugin to commit package json manifest json and changelog md github plugin to push changes and update issues and prs with release one question is whether we can use a pr to commit the changes and then after that s merged we can trigger the publish and final tagging by convention this process auto releases whenver a commit is published but i think we ll probably use a manual trigger to avoid spamming
binary_label: 1
Unnamed: 0: 348,124
id: 10,439,219,859
type: IssuesEvent
created_at: 2019-09-18 05:28:01
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: mail.google.com - site is not usable
labels: browser-firefox-mobile engine-gecko priority-critical
body:
<!-- @browser: Firefox Mobile 68.0 --> <!-- @ua_header: QwantMobile/3.3 (Android 5.0.2; Mobile; rv:68.0) Gecko/68.0 QwantBrowser Firefox/68.0.2 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mail.google.com/mail/mu/mp/373/#mn **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 5.0.2 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Impossible account deconnection **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/9/0975a5bf-5e28-496c-83fd-9286580aed6a-thumb.jpeg)](https://webcompat.com/uploads/2019/9/0975a5bf-5e28-496c-83fd-9286580aed6a.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190814171121</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> <p>Console Messages:</p> <pre> ['[JavaScript Warning: "Content Security Policy: Ignoring \'report-sample\' within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring https: within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring http: within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 141 column: 14 source: "//@ sourceURL=closure,ui,prefsmanager.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. 
Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 65 column: 14 source: "//@ sourceURL=tl.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 53 column: 14 source: "//@ sourceURL=ads.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 167 column: 14 source: "//@ sourceURL=offline.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 8 column: 14 source: "//@ sourceURL=cacheactions.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 52 column: 14 source: "//@ sourceURL=avatar,bamenu,menu.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 11 column: 14 source: "//@ sourceURL=imagepreloader.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 31 column: 14 source: "//@ sourceURL=tlactions.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 11 column: 14 source: "//@ sourceURL=keyboard.js"}]', '[JavaScript Warning: "This site appears to use a scroll-linked positioning effect. This may not work well with asynchronous panning; see https://developer.mozilla.org/docs/Mozilla/Performance/ScrollLinkedEffects for further details and to join the discussion on related tools and features!" 
{file: "https://mail.google.com/mail/mu/mp/373/#tl/priority/%5Esmartlabel_personal" line: 0}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 19 column: 14 source: "//@ sourceURL=settings.js"}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
text_combine:
mail.google.com - site is not usable - <!-- @browser: Firefox Mobile 68.0 --> <!-- @ua_header: QwantMobile/3.3 (Android 5.0.2; Mobile; rv:68.0) Gecko/68.0 QwantBrowser Firefox/68.0.2 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mail.google.com/mail/mu/mp/373/#mn **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 5.0.2 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: Impossible account deconnection **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/9/0975a5bf-5e28-496c-83fd-9286580aed6a-thumb.jpeg)](https://webcompat.com/uploads/2019/9/0975a5bf-5e28-496c-83fd-9286580aed6a.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190814171121</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> <p>Console Messages:</p> <pre> ['[JavaScript Warning: "Content Security Policy: Ignoring \'report-sample\' within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring https: within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Content Security Policy: Ignoring http: within script-src: strict-dynamic specified"]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 141 column: 14 source: "//@ sourceURL=closure,ui,prefsmanager.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. 
Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 65 column: 14 source: "//@ sourceURL=tl.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 53 column: 14 source: "//@ sourceURL=ads.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 167 column: 14 source: "//@ sourceURL=offline.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 8 column: 14 source: "//@ sourceURL=cacheactions.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 52 column: 14 source: "//@ sourceURL=avatar,bamenu,menu.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 11 column: 14 source: "//@ sourceURL=imagepreloader.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 31 column: 14 source: "//@ sourceURL=tlactions.js"}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 11 column: 14 source: "//@ sourceURL=keyboard.js"}]', '[JavaScript Warning: "This site appears to use a scroll-linked positioning effect. This may not work well with asynchronous panning; see https://developer.mozilla.org/docs/Mozilla/Performance/ScrollLinkedEffects for further details and to join the discussion on related tools and features!" 
{file: "https://mail.google.com/mail/mu/mp/373/#tl/priority/%5Esmartlabel_personal" line: 0}]', '[JavaScript Warning: "Using //@ to indicate sourceURL pragmas is deprecated. Use //# instead" {file: "https://mail.google.com/mail/mu/mp/373/" line: 19 column: 14 source: "//@ sourceURL=settings.js"}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
label: non_process
text:
mail google com site is not usable url browser version firefox mobile operating system android tested another browser no problem type site is not usable description impossible account deconnection steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel default hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false console messages from with ❤️
binary_label: 0
Unnamed: 0: 15,349
id: 19,515,143,687
type: IssuesEvent
created_at: 2021-12-29 08:56:57
repo: linuxdeepin/developer-center
repo_url: https://api.github.com/repos/linuxdeepin/developer-center
action: closed
title: Some files are installed into `/var/cache/` folder
labels: other | need confirmation other | delay processing
body:
From [package file list](https://www.archlinux.org/packages/community/any/deepin-wallpapers/), some files are installed into `/var/cache/` folder. I don't think they should be bundled into the package. ``` Package Contents usr/ usr/share/ usr/share/backgrounds/ usr/share/backgrounds/deepin/ usr/share/backgrounds/deepin/desktop.jpg usr/share/wallpapers/ usr/share/wallpapers/deepin/ usr/share/wallpapers/deepin/Balloon_by_Matt_Benson.jpg usr/share/wallpapers/deepin/Beach_by_Samuel_Scrimshaw.jpg usr/share/wallpapers/deepin/Flowing_Ribbon.jpg usr/share/wallpapers/deepin/Flying_Whale_by_Shu_Le.jpg usr/share/wallpapers/deepin/Grassland_Scenery_by_Shi_Yuejun.jpg usr/share/wallpapers/deepin/Hummingbird_by_Shu_Le.jpg usr/share/wallpapers/deepin/Overlooking_by_Lance_Asper.jpg usr/share/wallpapers/deepin/Paper_Architecture_by_Dmitri_Popov.jpg usr/share/wallpapers/deepin/Scenery_in_Plateau_by_Arto_Marttinen.jpg usr/share/wallpapers/deepin/desktop.jpg var/ var/cache/ var/cache/image-blur/ var/cache/image-blur/06a603f05e30b4cfcf2a70694f062330.jpg var/cache/image-blur/6275ccae0513a0fe61079d045be6cc4b.jpg var/cache/image-blur/71d68f2ac762023ecaea864de2dd9490.jpg var/cache/image-blur/8421eebb3e37d07678c41002b1ec89f3.jpg var/cache/image-blur/87a7810cf4d14cfae69e3d89fb42179a.jpg var/cache/image-blur/8a72ea8c487812fee42693db55d47b14.jpg var/cache/image-blur/9ec059b3eec52474af9e1f81fe42e5a0.jpg var/cache/image-blur/b0987cf0c936a18e0f020366cddb63ee.jpg var/cache/image-blur/b0b7cab0b915ef1fe721ce77c8c6fe2b.jpg var/cache/image-blur/eff98ab2b2d91574a79fabd2b79eb7a9.jpg var/cache/image-blur/f1b59d03d44b2c3babaa7769b3f905b3.jpg ```
index: 1.0
text_combine:
Some files are installed into `/var/cache/` folder - From [package file list](https://www.archlinux.org/packages/community/any/deepin-wallpapers/), some files are installed into `/var/cache/` folder. I don't think they should be bundled into the package. ``` Package Contents usr/ usr/share/ usr/share/backgrounds/ usr/share/backgrounds/deepin/ usr/share/backgrounds/deepin/desktop.jpg usr/share/wallpapers/ usr/share/wallpapers/deepin/ usr/share/wallpapers/deepin/Balloon_by_Matt_Benson.jpg usr/share/wallpapers/deepin/Beach_by_Samuel_Scrimshaw.jpg usr/share/wallpapers/deepin/Flowing_Ribbon.jpg usr/share/wallpapers/deepin/Flying_Whale_by_Shu_Le.jpg usr/share/wallpapers/deepin/Grassland_Scenery_by_Shi_Yuejun.jpg usr/share/wallpapers/deepin/Hummingbird_by_Shu_Le.jpg usr/share/wallpapers/deepin/Overlooking_by_Lance_Asper.jpg usr/share/wallpapers/deepin/Paper_Architecture_by_Dmitri_Popov.jpg usr/share/wallpapers/deepin/Scenery_in_Plateau_by_Arto_Marttinen.jpg usr/share/wallpapers/deepin/desktop.jpg var/ var/cache/ var/cache/image-blur/ var/cache/image-blur/06a603f05e30b4cfcf2a70694f062330.jpg var/cache/image-blur/6275ccae0513a0fe61079d045be6cc4b.jpg var/cache/image-blur/71d68f2ac762023ecaea864de2dd9490.jpg var/cache/image-blur/8421eebb3e37d07678c41002b1ec89f3.jpg var/cache/image-blur/87a7810cf4d14cfae69e3d89fb42179a.jpg var/cache/image-blur/8a72ea8c487812fee42693db55d47b14.jpg var/cache/image-blur/9ec059b3eec52474af9e1f81fe42e5a0.jpg var/cache/image-blur/b0987cf0c936a18e0f020366cddb63ee.jpg var/cache/image-blur/b0b7cab0b915ef1fe721ce77c8c6fe2b.jpg var/cache/image-blur/eff98ab2b2d91574a79fabd2b79eb7a9.jpg var/cache/image-blur/f1b59d03d44b2c3babaa7769b3f905b3.jpg ```
label: process
text:
some files are installed into var cache folder from some files are installed into var cache folder i don t think they should be bundled into the package package contents usr usr share usr share backgrounds usr share backgrounds deepin usr share backgrounds deepin desktop jpg usr share wallpapers usr share wallpapers deepin usr share wallpapers deepin balloon by matt benson jpg usr share wallpapers deepin beach by samuel scrimshaw jpg usr share wallpapers deepin flowing ribbon jpg usr share wallpapers deepin flying whale by shu le jpg usr share wallpapers deepin grassland scenery by shi yuejun jpg usr share wallpapers deepin hummingbird by shu le jpg usr share wallpapers deepin overlooking by lance asper jpg usr share wallpapers deepin paper architecture by dmitri popov jpg usr share wallpapers deepin scenery in plateau by arto marttinen jpg usr share wallpapers deepin desktop jpg var var cache var cache image blur var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg var cache image blur jpg
binary_label: 1
Unnamed: 0: 203,932
id: 15,395,363,264
type: IssuesEvent
created_at: 2021-03-03 19:07:30
repo: openservicemesh/osm
repo_url: https://api.github.com/repos/openservicemesh/osm
action: opened
title: test: pkg/envoy GetLocalClusterNameForService
labels: tests
body:
In `pkg/envoy` `GetLocalClusterNameForService()` does not have good unit test coverage. It would be great to write a small test for this function. ![image](https://user-images.githubusercontent.com/49918230/109858294-7b9ade00-7c10-11eb-8e4f-c4a20b02c21c.png)
index: 1.0
text_combine:
test: pkg/envoy GetLocalClusterNameForService - In `pkg/envoy` `GetLocalClusterNameForService()` does not have good unit test coverage. It would be great to write a small test for this function. ![image](https://user-images.githubusercontent.com/49918230/109858294-7b9ade00-7c10-11eb-8e4f-c4a20b02c21c.png)
label: non_process
text:
test pkg envoy getlocalclusternameforservice in pkg envoy getlocalclusternameforservice does not have good unit test coverage it would be great to write a small test for this function
binary_label: 0
Unnamed: 0: 20,599
id: 27,265,744,461
type: IssuesEvent
created_at: 2023-02-22 17:53:28
repo: googleapis/google-cloud-node
repo_url: https://api.github.com/repos/googleapis/google-cloud-node
action: closed
title: Your .repo-metadata.json file has a problem 🤒
labels: type: process api: file repo-metadata: lint
body:
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'filestore' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
index: 1.0
text_combine:
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'filestore' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
label: process
text:
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname filestore invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
binary_label: 1
Unnamed: 0: 14,025
id: 16,825,591,539
type: IssuesEvent
created_at: 2021-06-17 18:06:18
repo: GoogleCloudPlatform/cloud-ops-sandbox
repo_url: https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
action: closed
title: Sandboxctl destroy timeout
labels: priority: p2 type: feature request type: process
body:
When running Sandboxctl destroy, cloud shell session timeout before command complete. Also it will be useful to have status updates during the run.
index: 1.0
text_combine:
Sandboxctl destroy timeout - When running Sandboxctl destroy, cloud shell session timeout before command complete. Also it will be useful to have status updates during the run.
label: process
text:
sandboxctl destroy timeout when running sandboxctl destroy cloud shell session timeout before command complete also it will be useful to have status updates during the run
binary_label: 1
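Across the sample records, the derived columns appear to follow a simple recipe: `text_combine` looks like `title` joined to `body` with `" - "`, and `text` looks like a lowercased copy with URLs, markdown links, digits, and punctuation stripped. A rough, assumed reconstruction (the dataset's real preprocessing code is not shown here), checked against the Sandboxctl record above:

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine appears to be title and body joined with " - ".
    return f"{title} - {body}"

def normalize(text_combine: str) -> str:
    # The `text` column looks like text_combine lowercased, with markdown
    # links unwrapped, URLs dropped, and non-letter characters removed.
    # This is an approximation inferred from the samples, not the real code.
    t = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text_combine)  # unwrap [label](url)
    t = re.sub(r"https?://\S+", " ", t)                        # drop bare URLs
    t = re.sub(r"[^a-z\s]", " ", t.lower())                    # keep letters only
    return re.sub(r"\s+", " ", t).strip()                      # collapse whitespace

result = normalize(combine(
    "Sandboxctl destroy timeout",
    "When running Sandboxctl destroy, cloud shell session timeout before command complete.",
))
print(result)
# sandboxctl destroy timeout when running sandboxctl destroy cloud shell
# session timeout before command complete
```

On this input the sketch reproduces the opening of the record's stored `text` value, which suggests the inferred pipeline is at least close.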
Unnamed: 0: 5,635
id: 8,485,722,639
type: IssuesEvent
created_at: 2018-10-26 08:44:08
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: closed
title: child process exec() function performance degraded when called from socket multiple times simultaneously
labels: child_process performance
body:
Hi I am facing same issue relevant to this post (https://github.com/nodejs/node/issues/5636) on nodejsv10.4.0 I am trying to execute a bash script from this exec child process. It works fine when i call it from a socket one time. But when it gets called simulatenously , then child process becomes too slow. Any one knows how to fix this issue. Nodejs version: v10.4.0 Ubuntu 16.04 (4.13.0-43-generic) var address="10.0.0.0"; ```js socket.on('start@'+address, (data) =>{ var start_time=new Date(); var command="./migrate2.sh " + data.arg1+" "+data.arg2; exec(command,function(err, stdout){ if stdout { var end_time=new Date(); console.log("Execution Time : "+(end_time.getTime()-start_time.getTime() )+"ms"); } }); }); ``` One time socket call result in : Execution Time : 1708 Calling exec() 3 time simultaneously results in delayed execution time for each call: Execution Time : 4968 ms Execution Time: 5103 ms Execution Time: 5203 ms
index: 1.0
text_combine:
child process exec() function performance degraded when called from socket multiple times simultaneously - Hi I am facing same issue relevant to this post (https://github.com/nodejs/node/issues/5636) on nodejsv10.4.0 I am trying to execute a bash script from this exec child process. It works fine when i call it from a socket one time. But when it gets called simulatenously , then child process becomes too slow. Any one knows how to fix this issue. Nodejs version: v10.4.0 Ubuntu 16.04 (4.13.0-43-generic) var address="10.0.0.0"; ```js socket.on('start@'+address, (data) =>{ var start_time=new Date(); var command="./migrate2.sh " + data.arg1+" "+data.arg2; exec(command,function(err, stdout){ if stdout { var end_time=new Date(); console.log("Execution Time : "+(end_time.getTime()-start_time.getTime() )+"ms"); } }); }); ``` One time socket call result in : Execution Time : 1708 Calling exec() 3 time simultaneously results in delayed execution time for each call: Execution Time : 4968 ms Execution Time: 5103 ms Execution Time: 5203 ms
label: process
text:
child process exec function performance degraded when called from socket multiple times simultaneously hi i am facing same issue relevant to this post on i am trying to execute a bash script from this exec child process it works fine when i call it from a socket one time but when it gets called simulatenously then child process becomes too slow any one knows how to fix this issue nodejs version ubuntu generic var address js socket on start address data var start time new date var command sh data data exec command function err stdout if stdout var end time new date console log execution time end time gettime start time gettime ms one time socket call result in execution time calling exec time simultaneously results in delayed execution time for each call execution time ms execution time ms execution time ms
binary_label: 1
Unnamed: 0: 31,448
id: 7,370,154,692
type: IssuesEvent
created_at: 2018-03-13 07:11:28
repo: Owldream/Ginseng
repo_url: https://api.github.com/repos/Owldream/Ginseng
action: opened
title: Create basic Tefri operators
labels: code development
body:
# Basic Tefri operators ### Flow combination * [ ] combine_latest
* [ ] concat
* [ ] merge
* [ ] race
* [ ] zip
---- ### Conditional * [ ] every
* [ ] is_sequence_equal
---- ### Filtering * [ ] distinct
* [ ] distinct_until_changed * [ ] element_at
* [x] filter
* [ ] find
* [ ] find_index
* [ ] first * [ ] last
* [ ] sample
* [ ] skip
* [ ] skip_until
* [ ] skip_while
* [ ] take
* [x] take_last * [ ] take_until
* [ ] take_while
---- ### Math * [ ] count
* [ ] max
* [ ] min
* [ ] reduce
---- ### Transformation * [ ] buffer * [ ] buffer_count * [x] map
* [ ] pairwise
* [ ] combine * [ ] repeat
index: 1.0
text_combine:
Create basic Tefri operators - # Basic Tefri operators ### Flow combination * [ ] combine_latest
* [ ] concat
* [ ] merge
* [ ] race
* [ ] zip
---- ### Conditional * [ ] every
* [ ] is_sequence_equal
---- ### Filtering * [ ] distinct
* [ ] distinct_until_changed * [ ] element_at
* [x] filter
* [ ] find
* [ ] find_index
* [ ] first * [ ] last
* [ ] sample
* [ ] skip
* [ ] skip_until
* [ ] skip_while
* [ ] take
* [x] take_last * [ ] take_until
* [ ] take_while
---- ### Math * [ ] count
* [ ] max
* [ ] min
* [ ] reduce
---- ### Transformation * [ ] buffer * [ ] buffer_count * [x] map
* [ ] pairwise
* [ ] combine * [ ] repeat
label: non_process
text:
create basic tefri operators basic tefri operators flow combination combine latest
concat
merge
race
zip
conditional every
is sequence equal
filtering distinct
distinct until changed element at
filter
find
find index
first last
sample
skip
skip until
skip while
take
take last take until
take while
math count
max
min
reduce
transformation buffer buffer count map
pairwise
combine repeat
binary_label: 0
Unnamed: 0: 40,874
id: 12,799,761,398
type: IssuesEvent
created_at: 2020-07-02 15:54:09
repo: mwilliams7197/bootstrap
repo_url: https://api.github.com/repos/mwilliams7197/bootstrap
action: opened
title: CVE-2020-7598 (Medium) detected in multiple libraries
labels: security vulnerability
body:
## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.1.3.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cli-7.2.3.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.1.3.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.1.3.tgz">https://registry.npmjs.org/minimist/-/minimist-1.1.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/gonzales-pe/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - stylelint-9.9.0.tgz (Root Library) - postcss-sass-0.3.5.tgz - gonzales-pe-4.2.3.tgz - :x: **minimist-1.1.3.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/ecstatic/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - coveralls-3.0.2.tgz (Root Library) - :x: **minimist-1.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD 
commit: <a href="https://api.github.com/repos/mwilliams7197/bootstrap/commits/cef7d523bb5983ecfdc468692177af7cd1632f48">cef7d523bb5983ecfdc468692177af7cd1632f48</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.8","isTransitiveDependency":true,"dependencyTree":"@babel/cli:7.2.3;mkdirp:0.5.1;minimist:0.0.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.1.3","isTransitiveDependency":true,"dependencyTree":"stylelint:9.9.0;postcss-sass:0.3.5;gonzales-pe:4.2.3;minimist:1.1.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.0","isTransitiveDependency":true,"dependencyTree":"coveralls:3.0.2;minimist:1.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"}],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-7598 (Medium) detected in multiple libraries - ## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.1.3.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cli-7.2.3.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.1.3.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.1.3.tgz">https://registry.npmjs.org/minimist/-/minimist-1.1.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/gonzales-pe/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - stylelint-9.9.0.tgz (Root Library) - postcss-sass-0.3.5.tgz - gonzales-pe-4.2.3.tgz - :x: **minimist-1.1.3.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/bootstrap/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/bootstrap/node_modules/ecstatic/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - coveralls-3.0.2.tgz (Root Library) - :x: 
**minimist-1.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://api.github.com/repos/mwilliams7197/bootstrap/commits/cef7d523bb5983ecfdc468692177af7cd1632f48">cef7d523bb5983ecfdc468692177af7cd1632f48</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.8","isTransitiveDependency":true,"dependencyTree":"@babel/cli:7.2.3;mkdirp:0.5.1;minimist:0.0.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.1.3","isTransitiveDependency":true,"dependencyTree":"stylelint:9.9.0;postcss-sass:0.3.5;gonzales-pe:4.2.3;minimist:1.1.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"},{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.0","isTransitiveDependency":true,"dependencyTree":"coveralls:3.0.2;minimist:1.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.3"}],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file tmp ws scm bootstrap package json path to vulnerable library tmp ws scm bootstrap node modules minimist package json dependency hierarchy cli tgz root library mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tmp ws scm bootstrap package json path to vulnerable library tmp ws scm bootstrap node modules gonzales pe node modules minimist package json dependency hierarchy stylelint tgz root library postcss sass tgz gonzales pe tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tmp ws scm bootstrap package json path to vulnerable library tmp ws scm bootstrap node modules ecstatic node modules minimist package json dependency hierarchy coveralls tgz root library x minimist tgz vulnerable library found in head commit a href vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload vulnerabilityurl
0
197,170
22,580,984,723
IssuesEvent
2022-06-28 11:37:17
powsybl/powsybl-core
https://api.github.com/repos/powsybl/powsybl-core
opened
Security analysis lost load/generation
Security Analysis Feature
* **Do you want to request a *feature* or report a *bug*?** Feature * **What is the current behavior?** When running a security analysis, for some contingencies, we could end up with a loss of generation or loss. This is not reported in security analysis result API. * **What is the expected behavior?** We should add to PostContingencyResult, the amount of generation and load lost because of the contingency. * **What is the motivation / use case for changing the behavior?** * **Please tell us about your environment:** - PowSyBl Version: ... - OS Version: ... * **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, spectrum, etc) (if a question doesn't apply, you can delete it)
True
Security analysis lost load/generation - * **Do you want to request a *feature* or report a *bug*?** Feature * **What is the current behavior?** When running a security analysis, for some contingencies, we could end up with a loss of generation or loss. This is not reported in security analysis result API. * **What is the expected behavior?** We should add to PostContingencyResult, the amount of generation and load lost because of the contingency. * **What is the motivation / use case for changing the behavior?** * **Please tell us about your environment:** - PowSyBl Version: ... - OS Version: ... * **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, spectrum, etc) (if a question doesn't apply, you can delete it)
non_process
security analysis lost load generation do you want to request a feature or report a bug feature what is the current behavior when running a security analysis for some contingencies we could end up with a loss of generation or loss this is not reported in security analysis result api what is the expected behavior we should add to postcontingencyresult the amount of generation and load lost because of the contingency what is the motivation use case for changing the behavior please tell us about your environment powsybl version os version other information e g detailed explanation stacktraces related issues suggestions how to fix links for us to have context eg stackoverflow spectrum etc if a question doesn t apply you can delete it
0
15,999
20,188,206,948
IssuesEvent
2022-02-11 01:18:01
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Clearly define CI/CD roles and permissions
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Model & DevOps Roles & Responsibilities
<a href="https://docs.microsoft.com/azure/architecture/framework/security/deploy-governance#minimize-access">Clearly define CI/CD roles and permissions</a> <p><b>Why Consider This?</b></p> Only users responsible for production releases should be able to initiate that process, and only developers should have access to source code. <p><b>Context</b></p> <p><span>To support security of your pipeline operations, you can add users to a built-in security group, set individual permissions for a user or group, or add users to pre-defined roles. You manage security for the following objects from Azure Pipelines in the web portal, either from the user or admin context.</span></p><p><span>For permissions, you grant or restrict permissions by setting the permission state to Allow or Deny, either for a security group or an individual user. For a role, you add a user or group to the role.</span></p><p><span>"nbsp;</span></p><p><span>"nbsp;</span></p> <p><b>Suggested Actions</b></p> <p><span>Review the organization's CI/CD pipeline and refine role assignment to create a clear delineation between development/production responsibilities."nbsp; </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/policies/permissions?view=azure-devops" target="_blank"><span>Pipeline permissions and security roles</span></a><span /></p>
1.0
Clearly define CI/CD roles and permissions - <a href="https://docs.microsoft.com/azure/architecture/framework/security/deploy-governance#minimize-access">Clearly define CI/CD roles and permissions</a> <p><b>Why Consider This?</b></p> Only users responsible for production releases should be able to initiate that process, and only developers should have access to source code. <p><b>Context</b></p> <p><span>To support security of your pipeline operations, you can add users to a built-in security group, set individual permissions for a user or group, or add users to pre-defined roles. You manage security for the following objects from Azure Pipelines in the web portal, either from the user or admin context.</span></p><p><span>For permissions, you grant or restrict permissions by setting the permission state to Allow or Deny, either for a security group or an individual user. For a role, you add a user or group to the role.</span></p><p><span>&nbsp;</span></p><p><span>&nbsp;</span></p> <p><b>Suggested Actions</b></p> <p><span>Review the organization's CI/CD pipeline and refine role assignment to create a clear delineation between development/production responsibilities.&nbsp; </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/policies/permissions?view=azure-devops" target="_blank"><span>Pipeline permissions and security roles</span></a><span /></p>
process
clearly define ci cd roles and permissions why consider this only users responsible for production releases should be able to initiate that process and only developers should have access to source code context to support security of your pipeline operations you can add users to a built in security group set individual permissions for a user or group or add users to pre defined roles you manage security for the following objects from azure pipelines in the web portal either from the user or admin context for permissions you grant or restrict permissions by setting the permission state to allow or deny either for a security group or an individual user for a role you add a user or group to the role nbsp nbsp suggested actions review the organization s ci cd pipeline and refine role assignment to create a clear delineation between development production responsibilities nbsp learn more pipeline permissions and security roles
1
302,435
9,259,562,800
IssuesEvent
2019-03-18 00:30:36
VGV-Team/StandaloneCrawler
https://api.github.com/repos/VGV-Team/StandaloneCrawler
opened
Implement multi-threading management
Low Priority task
Add basic functions to start, stop and pause threads on demand
1.0
Implement multi-threading management - Add basic functions to start, stop and pause threads on demand
non_process
implement multi threading management add basic functions to start stop and pause threads on demand
0
92,318
18,835,505,335
IssuesEvent
2021-11-11 00:06:39
MicrosoftDocs/visualstudio-docs
https://api.github.com/repos/MicrosoftDocs/visualstudio-docs
closed
EnableNETAnalyzers for pre 5.0 does not work
doc-bug visual-studio-windows/prod vs-ide-code-analysis/tech Pri1 stale auto-close
The linked doc states that "You can enable code analysis on projects that target earlier .NET versions by setting the EnableNETAnalyzers property to true." Given this code: ```csharp public class Class1 { public Class1(string bob) { Console.WriteLine(bob.ToString()); } } ``` And this csproj: ``` <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net5.0</TargetFramework> <EnableNETAnalyzers>true</EnableNETAnalyzers> <AnalysisMode>AllEnabledByDefault</AnalysisMode> </PropertyGroup> </Project> ``` `dotnet build` will give: > warning CA1062: In externally visible method 'Class1.Class1(string bob)', validate parameter 'bob' is non-null before using it. If appropriate, throw an ArgumentNullException when the argument is null or add a Code Contract precondition asserting non-null argument. [C:\code\CA1062Test\CA1062Test.csproj] Make this one change to .csproj: ``` <TargetFramework>netcoreapp3.1</TargetFramework> ``` then you get *no* build warnings. For completeness, this .csproj will also not give any build warnings: ``` <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <AnalysisMode>AllEnabledByDefault</AnalysisMode> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.CodeAnalysis.NetAnalyzers" Version="5.0.3"> <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets> <PrivateAssets>all</PrivateAssets> </PackageReference> </ItemGroup> </Project> ``` For reference: >C:\code\CA1062Test> dotnet --list-sdks >3.1.404 [C:\Program Files\dotnet\sdk] >5.0.101 [C:\Program Files\dotnet\sdk] --- #### Document details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 04393392-f11e-4825-2d02-ba41e6c6d882 * Version Independent ID: c86598c1-1363-d11b-df15-ca32bf970de5 * Content: [Enable or install first-party .NET analyzers - Visual Studio](https://docs.microsoft.com/en-gb/visualstudio/code-quality/install-net-analyzers?view=vs-2019) * Content Source: [docs/code-quality/install-net-analyzers.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/code-quality/install-net-analyzers.md) * Product: **visual-studio-windows** * Technology: **vs-ide-code-analysis** * GitHub Login: @Mikejo5000 * Microsoft Alias: **mikejo**
1.0
EnableNETAnalyzers for pre 5.0 does not work - The linked doc states that "You can enable code analysis on projects that target earlier .NET versions by setting the EnableNETAnalyzers property to true." Given this code: ```csharp public class Class1 { public Class1(string bob) { Console.WriteLine(bob.ToString()); } } ``` And this csproj: ``` <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net5.0</TargetFramework> <EnableNETAnalyzers>true</EnableNETAnalyzers> <AnalysisMode>AllEnabledByDefault</AnalysisMode> </PropertyGroup> </Project> ``` `dotnet build` will give: > warning CA1062: In externally visible method 'Class1.Class1(string bob)', validate parameter 'bob' is non-null before using it. If appropriate, throw an ArgumentNullException when the argument is null or add a Code Contract precondition asserting non-null argument. [C:\code\CA1062Test\CA1062Test.csproj] Make this one change to .csproj: ``` <TargetFramework>netcoreapp3.1</TargetFramework> ``` then you get *no* build warnings. For completeness, this .csproj will also not give any build warnings: ``` <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <AnalysisMode>AllEnabledByDefault</AnalysisMode> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.CodeAnalysis.NetAnalyzers" Version="5.0.3"> <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets> <PrivateAssets>all</PrivateAssets> </PackageReference> </ItemGroup> </Project> ``` For reference: >C:\code\CA1062Test> dotnet --list-sdks >3.1.404 [C:\Program Files\dotnet\sdk] >5.0.101 [C:\Program Files\dotnet\sdk] --- #### Document details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 04393392-f11e-4825-2d02-ba41e6c6d882 * Version Independent ID: c86598c1-1363-d11b-df15-ca32bf970de5 * Content: [Enable or install first-party .NET analyzers - Visual Studio](https://docs.microsoft.com/en-gb/visualstudio/code-quality/install-net-analyzers?view=vs-2019) * Content Source: [docs/code-quality/install-net-analyzers.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/code-quality/install-net-analyzers.md) * Product: **visual-studio-windows** * Technology: **vs-ide-code-analysis** * GitHub Login: @Mikejo5000 * Microsoft Alias: **mikejo**
non_process
enablenetanalyzers for pre does not work the linked doc states that you can enable code analysis on projects that target earlier net versions by setting the enablenetanalyzers property to true given this code csharp public class public string bob console writeline bob tostring and this csproj true allenabledbydefault dotnet build will give warning in externally visible method string bob validate parameter bob is non null before using it if appropriate throw an argumentnullexception when the argument is null or add a code contract precondition asserting non null argument make this one change to csproj then you get no build warnings for completeness this csproj will also not give any build warnings allenabledbydefault runtime build native contentfiles analyzers buildtransitive all for reference c code dotnet list sdks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product visual studio windows technology vs ide code analysis github login microsoft alias mikejo
0
274,085
20,825,097,561
IssuesEvent
2022-03-18 19:46:27
aws-amplify/amplify-ui
https://api.github.com/repos/aws-amplify/amplify-ui
closed
Documentation: SelectField missing name
bug Documentation ready-for-planning
The documentation for the SelectField is missing the "name" prop. I tried it out in code and it works fine. https://ui.docs.amplify.aws/components/selectfield
1.0
Documentation: SelectField missing name - The documentation for the SelectField is missing the "name" prop. I tried it out in code and it works fine. https://ui.docs.amplify.aws/components/selectfield
non_process
documentation selectfield missing name the documentation for the selectfield is missing the name prop i tried it out in code and it works fine
0
2,689
5,538,573,678
IssuesEvent
2017-03-22 02:18:31
VeliovGroup/Meteor-Files
https://api.github.com/repos/VeliovGroup/Meteor-Files
closed
Optimizing an Image going to Amazon S3
AWS S3 Post Processing question
Great plugin, love it. Running on www.SkyRooms.IO - it's a beauty! I have configured S3 and CloudFront to deliver the images - but I need to compress images before I send them up. What's the best practice to achieve this? Thanks again!
1.0
Optimizing an Image going to Amazon S3 - Great plugin, love it. Running on www.SkyRooms.IO - it's a beauty! I have configured S3 and CloudFront to deliver the images - but I need to compress images before I send them up. What's the best practice to achieve this? Thanks again!
process
optimizing an image going to amazon great plugin love it running on it s a beauty i have configured and cloudfront to deliver the images but i need to compress images before i send them up what s the best practice to achieve this thanks again
1
78,875
15,586,082,161
IssuesEvent
2021-03-18 01:07:53
cnmeier/sonatype-depshield-demo
https://api.github.com/repos/cnmeier/sonatype-depshield-demo
closed
CVE-2019-17531 (High) detected in jackson-databind-2.9.4.jar - autoclosed
security vulnerability
## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/sonatype-depshield-demo/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library) - spring-boot-starter-json-2.0.0.RELEASE.jar - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/cnmeier/sonatype-depshield-demo/commit/f9f0ea86749749aa86612cb00b490b15db8c555b">f9f0ea86749749aa86612cb00b490b15db8c555b</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-17531 (High) detected in jackson-databind-2.9.4.jar - autoclosed - ## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/sonatype-depshield-demo/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library) - spring-boot-starter-json-2.0.0.RELEASE.jar - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/cnmeier/sonatype-depshield-demo/commit/f9f0ea86749749aa86612cb00b490b15db8c555b">f9f0ea86749749aa86612cb00b490b15db8c555b</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm sonatype depshield demo pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
14,980
18,509,577,281
IssuesEvent
2021-10-19 23:57:30
googleapis/python-analytics-admin
https://api.github.com/repos/googleapis/python-analytics-admin
closed
CI is broken
type: process api: analyticsadmin
Samples checks were recently added for this repository, however the checks are failing. The samples were not part of CI when they were originally added. See https://github.com/googleapis/python-analytics-admin/commits/master
1.0
CI is broken - Samples checks were recently added for this repository, however the checks are failing. The samples were not part of CI when they were originally added. See https://github.com/googleapis/python-analytics-admin/commits/master
process
ci is broken samples checks were recently added for this repository however the checks are failing the samples were not part of ci when they were originally added see
1
22,538
31,708,737,860
IssuesEvent
2023-09-09 02:32:59
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
opened
SwitchBot
NOT YET PROCESSED
- [x ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: SwitchBot What you would like to be able to make it do from Companion: Control all buttons/remotes/etc Direct links or attachments to the ethernet control protocol or API: https://github.com/OpenWonderLabs/SwitchBotAPI https://www.home-assistant.io/integrations/switchbot/ https://github.com/Danielhiversen/pySwitchbot
1.0
SwitchBot - - [x ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: SwitchBot What you would like to be able to make it do from Companion: Control all buttons/remotes/etc Direct links or attachments to the ethernet control protocol or API: https://github.com/OpenWonderLabs/SwitchBotAPI https://www.home-assistant.io/integrations/switchbot/ https://github.com/Danielhiversen/pySwitchbot
process
switchbot i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control switchbot what you would like to be able to make it do from companion control all buttons remotes etc direct links or attachments to the ethernet control protocol or api
1
58,397
7,136,235,943
IssuesEvent
2018-01-23 05:57:10
sipb/homeworld
https://api.github.com/repos/sipb/homeworld
closed
Basic monitoring
category design macro
As part of #172, and as a step towards #16, we need to set up some infrastructure that can monitor different parts of a cluster, either during setup or production. At some point, this will plug into Prometheus, probably, but we need a broader system, so that we have something that works before we can launch prometheus. This will monitor elements that include, but are not limited to: * That each node is online. * ~That each systemd service is running on each node.~ deferred, since this isn't necessary to replace the existing verification scripts. * That etcd is healthy. * That kubernetes is healthy. * That containers can be pulled from the registry. * That the core cluster services (flannel, dns-addon) are actively running. * That the flannel overlay network is functioning. * That DNS is actively accessible (via the dns-addon). * That SSH access is possible via cluster auth.
1.0
Basic monitoring - As part of #172, and as a step towards #16, we need to set up some infrastructure that can monitor different parts of a cluster, either during setup or production. At some point, this will plug into Prometheus, probably, but we need a broader system, so that we have something that works before we can launch prometheus. This will monitor elements that include, but are not limited to: * That each node is online. * ~That each systemd service is running on each node.~ deferred, since this isn't necessary to replace the existing verification scripts. * That etcd is healthy. * That kubernetes is healthy. * That containers can be pulled from the registry. * That the core cluster services (flannel, dns-addon) are actively running. * That the flannel overlay network is functioning. * That DNS is actively accessible (via the dns-addon). * That SSH access is possible via cluster auth.
non_process
basic monitoring as part of and as a step towards we need to set up some infrastructure that can monitor different parts of a cluster either during setup or production at some point this will plug into prometheus probably but we need a broader system so that we have something that works before we can launch prometheus this will monitor elements that include but are not limited to that each node is online that each systemd service is running on each node deferred since this isn t necessary to replace the existing verification scripts that etcd is healthy that kubernetes is healthy that containers can be pulled from the registry that the core cluster services flannel dns addon are actively running that the flannel overlay network is functioning that dns is actively accessible via the dns addon that ssh access is possible via cluster auth
0
3,103
6,111,054,862
IssuesEvent
2017-06-21 16:11:01
nodejs/node
https://api.github.com/repos/nodejs/node
closed
"write after end" error when stdout was end()-ed
confirmed-bug process stream
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: `master`, presumably all recent node major versions * **Platform**: OS X 10.10.5 * **Subsystem**: process/console <!-- Enter your issue details below this comment. --> `stdout`/`stderr` unexpectedly throw a `write after end` error when `end()` has been called on them, even though `end()` throws it's own error in `destroy()`/`destroySoon()` overriding the usual destroy code which is in `net.Socket`. Since it overrides the destroy code, I don't think it should ever actually destroy the stream, which the error seems to indicate, but it appears to destroy it nonetheless. Try in the REPL: ```js process.stderr.end('foo') // throws error, REPL catches it console.error('hello?') // Unexpectedly throws "write after end" ``` cc @nodejs/streams
1.0
"write after end" error when stdout was end()-ed - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: `master`, presumably all recent node major versions * **Platform**: OS X 10.10.5 * **Subsystem**: process/console <!-- Enter your issue details below this comment. --> `stdout`/`stderr` unexpectedly throw a `write after end` error when `end()` has been called on them, even though `end()` throws it's own error in `destroy()`/`destroySoon()` overriding the usual destroy code which is in `net.Socket`. Since it overrides the destroy code, I don't think it should ever actually destroy the stream, which the error seems to indicate, but it appears to destroy it nonetheless. Try in the REPL: ```js process.stderr.end('foo') // throws error, REPL catches it console.error('hello?') // Unexpectedly throws "write after end" ``` cc @nodejs/streams
process
write after end error when stdout was end ed thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version master presumably all recent node major versions platform os x subsystem process console stdout stderr unexpectedly throw a write after end error when end has been called on them even though end throws it s own error in destroy destroysoon overriding the usual destroy code which is in net socket since it overrides the destroy code i don t think it should ever actually destroy the stream which the error seems to indicate but it appears to destroy it nonetheless try in the repl js process stderr end foo throws error repl catches it console error hello unexpectedly throws write after end cc nodejs streams
1
3,276
6,362,864,583
IssuesEvent
2017-07-31 15:47:51
Pale-Moon-Addons-Team/phoebus
https://api.github.com/repos/Pale-Moon-Addons-Team/phoebus
closed
Rewrite Reading and Processing Manifests, Content, and Licenses
Application: PHOEBUS Component: AUS Component: DOWNLOAD Component: LICENSE Component: SITE Module: funcProcessContent Module: funcReadManifest Status: FIXED
The Goal here is to rethink how manifests, content, and licenses are read and processed.. This should standardize how those bits are read and processed and generate the resulting data structure and take work out from the resulting components that use it.
1.0
Rewrite Reading and Processing Manifests, Content, and Licenses - The Goal here is to rethink how manifests, content, and licenses are read and processed.. This should standardize how those bits are read and processed and generate the resulting data structure and take work out from the resulting components that use it.
process
rewrite reading and processing manifests content and licenses the goal here is to rethink how manifests content and licenses are read and processed this should standardize how those bits are read and processed and generate the resulting data structure and take work out from the resulting components that use it
1
18,337
24,458,285,040
IssuesEvent
2022-10-07 08:49:41
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum.
bug/1-unconfirmed kind/bug process/candidate topic: error reporting team/schema topic: prisma db push topic: postgresql
<!-- If required, please update the title to be clear and descriptive --> Command: `prisma db push` Version: `4.4.0` Binary Version: `f352a33b70356f46311da8b00d83386dd9f145d6` Report: https://prisma-errors.netlify.app/report/14348 OS: `x64 darwin 21.3.0` JS Stacktrace: ``` Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum. ``` Rust Stacktrace: ``` Starting migration engine RPC server The migration was not applied because it triggered warnings and the force flag was not passed. [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum. ```
1.0
Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum. - <!-- If required, please update the title to be clear and descriptive --> Command: `prisma db push` Version: `4.4.0` Binary Version: `f352a33b70356f46311da8b00d83386dd9f145d6` Report: https://prisma-errors.netlify.app/report/14348 OS: `x64 darwin 21.3.0` JS Stacktrace: ``` Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum. ``` Rust Stacktrace: ``` Starting migration engine RPC server The migration was not applied because it triggered warnings and the force flag was not passed. [migration-engine/connectors/sql-migration-connector/src/sql_renderer/postgres_renderer.rs:944:22] We should only be setting a changed default if there was one on the previous schema and in the next with the same enum. ```
process
error in migration engine reason we should only be setting a changed default if there was one on the previous schema and in the next with the same enum command prisma db push version binary version report os darwin js stacktrace error error in migration engine reason we should only be setting a changed default if there was one on the previous schema and in the next with the same enum rust stacktrace starting migration engine rpc server the migration was not applied because it triggered warnings and the force flag was not passed we should only be setting a changed default if there was one on the previous schema and in the next with the same enum
1
517,092
14,994,470,269
IssuesEvent
2021-01-29 12:57:47
k0sproject/k0s
https://api.github.com/repos/k0sproject/k0s
closed
k0s supervisor should exit on fatal errors
area/controlplane area/worker bug priority/P1
Example: when mke binaries do not exist. We should identify the fatal errors, and clean up all dependent processes before exiting.
1.0
k0s supervisor should exit on fatal errors - Example: when mke binaries do not exist. We should identify the fatal errors, and clean up all dependent processes before exiting.
non_process
supervisor should exit on fatal errors example when mke binaries do not exist we should identify the fatal errors and clean up all dependent processes before exiting
0
9,452
12,429,278,331
IssuesEvent
2020-05-25 08:09:24
frictionlessdata/project
https://api.github.com/repos/frictionlessdata/project
opened
RFC process
Topic: Process
We've started moving some substantive items to an rfcs folder in this repo. It would be good to explain RFC process and approach. ## Acceptance * [ ] RFC overview in README * [ ] RFC template ## Tasks * [ ] Dig out rfcs guidelines e.g. https://github.com/vuejs/rfcs and extract relevant parts * [ ] Find a template
1.0
RFC process - We've started moving some substantive items to an rfcs folder in this repo. It would be good to explain RFC process and approach. ## Acceptance * [ ] RFC overview in README * [ ] RFC template ## Tasks * [ ] Dig out rfcs guidelines e.g. https://github.com/vuejs/rfcs and extract relevant parts * [ ] Find a template
process
rfc process we ve started moving some substantive items to an rfcs folder in this repo it would be good to explain rfc process and approach acceptance rfc overview in readme rfc template tasks dig out rfcs guidelines e g and extract relevant parts find a template
1
54,624
3,070,427,945
IssuesEvent
2015-08-19 04:12:38
v0lkan/board
https://api.github.com/repos/v0lkan/board
closed
demo app is a black black page on ios(8) mobile.
bug priority: high
not sure if it's because of websockets, or because of the network. -- need to check.
1.0
demo app is a black black page on ios(8) mobile. - not sure if it's because of websockets, or because of the network. -- need to check.
non_process
demo app is a black black page on ios mobile not sure if it s because of websockets or because of the network need to check
0
342,222
10,313,546,433
IssuesEvent
2019-08-29 23:16:13
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Catalog automatically converts a numbered string into number instead of keeping it as string
area/catalog area/cli internal kind/bug priority/0 team/cn
**Rancher versions:** rancher/server or rancher/rancher: master 04/27 **Docker version: (`docker version`,`docker info` preferred)** 17.03.2-ce **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** Ubuntu 16.04.4 LTS 4.4.0-1052-aws **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS **Environment Template: (Cattle/Kubernetes/Swarm/Mesos)** Kubernetes **Steps to Reproduce:** 1. Launch a wordpress Charts from https://github.com/guangbochen/charts.git 2. `WordPress Password` input `111111` ,Launch. **Results:** Failed to launch, error log: ``` Helm template failed. Error: render error in "wordpress/charts/mariadb/templates/secrets.yaml": template: wordpress/charts/mariadb/templates/secrets.yaml:19:49: executing "wordpress/charts/mariadb/templates/secrets.yaml" at <b64enc>: wrong type for value; expected string; got float64 : exit status 1 ```
1.0
Catalog automatically converts a numbered string into number instead of keeping it as string - **Rancher versions:** rancher/server or rancher/rancher: master 04/27 **Docker version: (`docker version`,`docker info` preferred)** 17.03.2-ce **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** Ubuntu 16.04.4 LTS 4.4.0-1052-aws **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS **Environment Template: (Cattle/Kubernetes/Swarm/Mesos)** Kubernetes **Steps to Reproduce:** 1. Launch a wordpress Charts from https://github.com/guangbochen/charts.git 2. `WordPress Password` input `111111` ,Launch. **Results:** Failed to launch, error log: ``` Helm template failed. Error: render error in "wordpress/charts/mariadb/templates/secrets.yaml": template: wordpress/charts/mariadb/templates/secrets.yaml:19:49: executing "wordpress/charts/mariadb/templates/secrets.yaml" at <b64enc>: wrong type for value; expected string; got float64 : exit status 1 ```
non_process
catalog automatically converts a numbered string into number instead of keeping it as string rancher versions rancher server or rancher rancher master docker version docker version docker info preferred ce operating system and kernel cat etc os release uname r preferred ubuntu lts aws type provider of hosts virtualbox bare metal aws gce do aws environment template cattle kubernetes swarm mesos kubernetes steps to reproduce launch a wordpress charts from wordpress password input launch results failed to launch error log helm template failed error render error in wordpress charts mariadb templates secrets yaml template wordpress charts mariadb templates secrets yaml executing wordpress charts mariadb templates secrets yaml at wrong type for value expected string got exit status
0
94,385
19,538,331,101
IssuesEvent
2021-12-31 13:13:13
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
DOC: whatsnew format for minor enhancements/bug-fixes
Docs Code Style Clean
Admittedly this is extremely minor, but it's something that has always irked me. Currently for minor (single-line, not a section) enhancements and bug-fixes, we have the format: > This is sentence one (GH28527) which on its own is perfectly fine. However when a line has multiple sentences: > This is sentence one. This is sentence two (GH28527) the lack of a period at the end looks off to me. I'd prefer if we did full sentences for each line followed by the GH link at the end, this way a multi-sentence doesn't look so odd. > This is sentence one. (GH28527) > This is sentence one. This is sentence two. (GH28527) Another option is to limit one-line notes to being a single sentence (which I'm opposed to), and of course we could just stay with it as is. If we are to make any changes, I'd do it for the version minor/major version after 1.3.0. cc @simonjayhawkins
1.0
DOC: whatsnew format for minor enhancements/bug-fixes - Admittedly this is extremely minor, but it's something that has always irked me. Currently for minor (single-line, not a section) enhancements and bug-fixes, we have the format: > This is sentence one (GH28527) which on its own is perfectly fine. However when a line has multiple sentences: > This is sentence one. This is sentence two (GH28527) the lack of a period at the end looks off to me. I'd prefer if we did full sentences for each line followed by the GH link at the end, this way a multi-sentence doesn't look so odd. > This is sentence one. (GH28527) > This is sentence one. This is sentence two. (GH28527) Another option is to limit one-line notes to being a single sentence (which I'm opposed to), and of course we could just stay with it as is. If we are to make any changes, I'd do it for the version minor/major version after 1.3.0. cc @simonjayhawkins
non_process
doc whatsnew format for minor enhancements bug fixes admittedly this is extremely minor but it s something that has always irked me currently for minor single line not a section enhancements and bug fixes we have the format this is sentence one which on its own is perfectly fine however when a line has multiple sentences this is sentence one this is sentence two the lack of a period at the end looks off to me i d prefer if we did full sentences for each line followed by the gh link at the end this way a multi sentence doesn t look so odd this is sentence one this is sentence one this is sentence two another option is to limit one line notes to being a single sentence which i m opposed to and of course we could just stay with it as is if we are to make any changes i d do it for the version minor major version after cc simonjayhawkins
0
321,803
23,873,030,539
IssuesEvent
2022-09-07 16:19:45
radical-cybertools/radical.pilot
https://api.github.com/repos/radical-cybertools/radical.pilot
opened
Pilot documentation
type:bug type:documentation type:enhancement layer:rp priority:low
This is related to #2218, the documentation shows that we support `virtualenv` here: https://radicalpilot.readthedocs.io/en/stable/apidoc.html#radical.pilot.Pilot.prepare_env. This is incorrect, as we switched from `virtualenv` to `venv`. I do not know who to assign to this ticket, as Kartik is working on the documentation, I will let @andre-merzky decide.
1.0
Pilot documentation - This is related to #2218, the documentation shows that we support `virtualenv` here: https://radicalpilot.readthedocs.io/en/stable/apidoc.html#radical.pilot.Pilot.prepare_env. This is incorrect, as we switched from `virtualenv` to `venv`. I do not know who to assign to this ticket, as Kartik is working on the documentation, I will let @andre-merzky decide.
non_process
pilot documentation this is related to the documentation shows that we support virtualenv here this is incorrect as we switched from virtualenv to venv i do not know who to assign to this ticket as kartik is working on the documentation i will let andre merzky decide
0
11,242
14,015,271,893
IssuesEvent
2020-10-29 13:06:54
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
pathway controlled value question
Class - Occurrence Process - implement Question - answered Term - change
Is the controlled value for https://dwc.tdwg.org/pw/#dwcpw_p011 really supposed to be 'publisGardenZooAquaria' and not 'publicGardenZooAquaria'?
1.0
pathway controlled value question - Is the controlled value for https://dwc.tdwg.org/pw/#dwcpw_p011 really supposed to be 'publisGardenZooAquaria' and not 'publicGardenZooAquaria'?
process
pathway controlled value question is the controlled value for really supposed to be publisgardenzooaquaria and not publicgardenzooaquaria
1
399,848
27,258,782,915
IssuesEvent
2023-02-22 13:30:53
KinsonDigital/GotNuget
https://api.github.com/repos/KinsonDigital/GotNuget
closed
🚧Remove repo related docs
medium priority preview 📝documentation/product
### Complete The Item Below - [X] I have updated the title without removing the 🚧 emoji. ### Description Remove repo specific docs that exist at the org level. Update link in READme to point to the org level contribution document. ### Acceptance Criteria - [x] Contributing - [x] Code_of_Conduct - [x] ./documentation/branching.md - [x] Table of Contents folder - [x] EnvironmentSetup folder - [x] Funding ### ToDo Items - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Issue linked to the correct project _(if applicable)_. - [X] Issue linked to the correct milestone _(if applicable)_. - [x] Draft pull request created and linked to this issue _(only required with code changes)_. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|----------------------| | Bug Fixes | `🐛bug` | | Breaking Changes | `🧨breaking changes` | | New Feature | `✨new feature` | | Workflow Changes | `workflow` | | Code Doc Changes | `🗒️documentation/code` | | Product Doc Changes | `📝documentation/product` | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|-------------------| | Low Priority | `low priority` | | Medium Priority | `medium priority` | | High Priority | `high priority` | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
1.0
🚧Remove repo related docs - ### Complete The Item Below - [X] I have updated the title without removing the 🚧 emoji. ### Description Remove repo specific docs that exist at the org level. Update link in READme to point to the org level contribution document. ### Acceptance Criteria - [x] Contributing - [x] Code_of_Conduct - [x] ./documentation/branching.md - [x] Table of Contents folder - [x] EnvironmentSetup folder - [x] Funding ### ToDo Items - [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below. - [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below. - [X] Issue linked to the correct project _(if applicable)_. - [X] Issue linked to the correct milestone _(if applicable)_. - [x] Draft pull request created and linked to this issue _(only required with code changes)_. ### Issue Dependencies _No response_ ### Related Work _No response_ ### Additional Information: **_<details closed><summary>Change Type Labels</summary>_** | Change Type | Label | |---------------------|----------------------| | Bug Fixes | `🐛bug` | | Breaking Changes | `🧨breaking changes` | | New Feature | `✨new feature` | | Workflow Changes | `workflow` | | Code Doc Changes | `🗒️documentation/code` | | Product Doc Changes | `📝documentation/product` | </details> **_<details closed><summary>Priority Type Labels</summary>_** | Priority Type | Label | |---------------------|-------------------| | Low Priority | `low priority` | | Medium Priority | `medium priority` | | High Priority | `high priority` | </details> ### Code of Conduct - [X] I agree to follow this project's Code of Conduct.
non_process
🚧remove repo related docs complete the item below i have updated the title without removing the 🚧 emoji description remove repo specific docs that exist at the org level update link in readme to point to the org level contribution document acceptance criteria contributing code of conduct documentation branching md table of contents folder environmentsetup folder funding todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if applicable issue linked to the correct milestone if applicable draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes new feature ✨new feature workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct
0
1,602
4,216,841,189
IssuesEvent
2016-06-30 10:44:25
Jumpscale/dockers
https://api.github.com/repos/Jumpscale/dockers
closed
need docker for js8 sandbox
priority_critical process_duplicate type_feature
- start from ubuntu1604 - install js8 + sandbox tool - build & push name: ubuntu1604_js8_sandbox
1.0
need docker for js8 sandbox - - start from ubuntu1604 - install js8 + sandbox tool - build & push name: ubuntu1604_js8_sandbox
process
need docker for sandbox start from install sandbox tool build push name sandbox
1
7,608
10,721,951,354
IssuesEvent
2019-10-27 08:01:38
KratosMultiphysics/Kratos
https://api.github.com/repos/KratosMultiphysics/Kratos
opened
Nodes/elements not found by point locator
Help Wanted Parallel-MPI Post Process
The brute_force_point_locator does not find the nodes/elements after a restart. The nodes were found in the first part of the simulation and also the files were written correctly. The problem only ocures only in the StructuralMechanicsApplication, everything works with FluidDynamics. Probably someone has an idea.
1.0
Nodes/elements not found by point locator - The brute_force_point_locator does not find the nodes/elements after a restart. The nodes were found in the first part of the simulation and also the files were written correctly. The problem only ocures only in the StructuralMechanicsApplication, everything works with FluidDynamics. Probably someone has an idea.
process
nodes elements not found by point locator the brute force point locator does not find the nodes elements after a restart the nodes were found in the first part of the simulation and also the files were written correctly the problem only ocures only in the structuralmechanicsapplication everything works with fluiddynamics probably someone has an idea
1
9,797
2,615,175,106
IssuesEvent
2015-03-01 06:58:18
chrsmith/reaver-wps
https://api.github.com/repos/chrsmith/reaver-wps
opened
WPS lock problem
auto-migrated Priority-Triage Type-Defect
``` Is this any way to avoid WPS lock problem. I also try delay and timeout option and also use fake mac but noting hapan ``` Original issue reported on code.google.com by `patilary...@gmail.com` on 16 Apr 2014 at 7:11
1.0
WPS lock problem - ``` Is this any way to avoid WPS lock problem. I also try delay and timeout option and also use fake mac but noting hapan ``` Original issue reported on code.google.com by `patilary...@gmail.com` on 16 Apr 2014 at 7:11
non_process
wps lock problem is this any way to avoid wps lock problem i also try delay and timeout option and also use fake mac but noting hapan original issue reported on code google com by patilary gmail com on apr at
0
13,936
16,703,727,363
IssuesEvent
2021-06-09 07:28:15
micado-eu/pa_application
https://api.github.com/repos/micado-eu/pa_application
closed
Guided processes: Marking process as translatable and sending to translation doesn't send the steps
guided processes
The steps are marked as editing We can't mark the steps as translatable either
1.0
Guided processes: Marking process as translatable and sending to translation doesn't send the steps - The steps are marked as editing We can't mark the steps as translatable either
process
guided processes marking process as translatable and sending to translation doesn t send the steps the steps are marked as editing we can t mark the steps as translatable either
1
19,555
10,369,104,486
IssuesEvent
2019-09-07 23:03:39
IncPlusPlus/betterstat-server
https://api.github.com/repos/IncPlusPlus/betterstat-server
closed
CVE-2015-9251 (Medium) detected in jquery-1.11.2.min.js
security vulnerability
## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js</a></p> <p>Path to dependency file: /betterstat-server/src/main/resources/templates/registration.html</p> <p>Path to vulnerable library: /betterstat-server/src/main/resources/templates/registration.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/IncPlusPlus/betterstat-server/commit/6c026eb8b67245c860f7cfd3312980b3081c3cbe">6c026eb8b67245c860f7cfd3312980b3081c3cbe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. <p>Publish Date: 2018-01-18 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-9251 (Medium) detected in jquery-1.11.2.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js</a></p> <p>Path to dependency file: /betterstat-server/src/main/resources/templates/registration.html</p> <p>Path to vulnerable library: /betterstat-server/src/main/resources/templates/registration.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/IncPlusPlus/betterstat-server/commit/6c026eb8b67245c860f7cfd3312980b3081c3cbe">6c026eb8b67245c860f7cfd3312980b3081c3cbe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file betterstat server src main resources templates registration html path to vulnerable library betterstat server src main resources templates registration html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
112,730
4,536,461,290
IssuesEvent
2016-09-08 20:30:13
ELVIS-Project/vis-framework
https://api.github.com/repos/ELVIS-Project/vis-framework
closed
Find lighter replacement for matplotlib and scipy
Priority: LOW
The matplotlib and scipy dependencies in VIS are by far the most cumbersome. We use them, above all, to generate the data used to create dendrograms. This means that the D3.js extensions in the VIS-ualizer won't replace these dependencies because they would still be our only way of generating the data the VISualizer needs to be able to plot dendrograms. At the moment I don't know of any alternative libraries.
1.0
Find lighter replacement for matplotlib and scipy - The matplotlib and scipy dependencies in VIS are by far the most cumbersome. We use them, above all, to generate the data used to create dendrograms. This means that the D3.js extensions in the VIS-ualizer won't replace these dependencies because they would still be our only way of generating the data the VISualizer needs to be able to plot dendrograms. At the moment I don't know of any alternative libraries.
non_process
find lighter replacement for matplotlib and scipy the matplotlib and scipy dependencies in vis are by far the most cumbersome we use them above all to generate the data used to create dendrograms this means that the js extensions in the vis ualizer won t replace these dependencies because they would still be our only way of generating the data the visualizer needs to be able to plot dendrograms at the moment i don t know of any alternative libraries
0
2,790
5,721,958,008
IssuesEvent
2017-04-20 08:15:15
orbardugo/Hahot-Hameshulash
https://api.github.com/repos/orbardugo/Hahot-Hameshulash
opened
Create ZF
in process
Todo: delete all unnecessary items, add suitable labels (with priority and estimation), milestone, assignee, etc. ## Feature Template #### Related Issues/Tasks - [ ] #1 - [ ] #2 #### Scenario (test) - [ ] Step 1 - [ ] ... - [ ] Expected result ## User Story Template - As a ... - I want ... - So that ... ## Bug Template #### Expected behavior #### Actual behavior #### Steps to reproduce the behavior ## Project Submission Template #### Iteration page: [here](../wiki/iter-6.md) #### Checklist, e.g.,: - [ ] Feature scenarios/tests passing - [ ] Iteration page updated, including: - [ ] Iteration retrospective - [ ] Code reviews - [ ] Client review - [ ] Issues updates - [ ] Section on application of course materials - [ ] git tag - [ ] Next iteration: - [ ] Open page - [ ] Select stories and plan issues - [ ] Test scenarios - [ ] All engineers filled peer-review - [ ] Submitted - [ ] Announcement in chat room - [ ] Assign this issue to checker - [ ] Register for a review meeting
1.0
Create ZF - Todo: delete all unnecessary items, add suitable labels (with priority and estimation), milestone, assignee, etc. ## Feature Template #### Related Issues/Tasks - [ ] #1 - [ ] #2 #### Scenario (test) - [ ] Step 1 - [ ] ... - [ ] Expected result ## User Story Template - As a ... - I want ... - So that ... ## Bug Template #### Expected behavior #### Actual behavior #### Steps to reproduce the behavior ## Project Submission Template #### Iteration page: [here](../wiki/iter-6.md) #### Checklist, e.g.,: - [ ] Feature scenarios/tests passing - [ ] Iteration page updated, including: - [ ] Iteration retrospective - [ ] Code reviews - [ ] Client review - [ ] Issues updates - [ ] Section on application of course materials - [ ] git tag - [ ] Next iteration: - [ ] Open page - [ ] Select stories and plan issues - [ ] Test scenarios - [ ] All engineers filled peer-review - [ ] Submitted - [ ] Announcement in chat room - [ ] Assign this issue to checker - [ ] Register for a review meeting
process
create zf todo delete all unnecessary items add suitable labels with priority and estimation milestone assignee etc feature template related issues tasks scenario test step expected result user story template as a i want so that bug template expected behavior actual behavior steps to reproduce the behavior project submission template iteration page wiki iter md checklist e g feature scenarios tests passing iteration page updated including iteration retrospective code reviews client review issues updates section on application of course materials git tag next iteration open page select stories and plan issues test scenarios all engineers filled peer review submitted announcement in chat room assign this issue to checker register for a review meeting
1
393,313
11,613,378,948
IssuesEvent
2020-02-26 10:37:14
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
auto.hwupgrade.it - see bug description
browser-firefox-mobile engine-gecko priority-normal
<!-- @browser: Firefox Mobile 71.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 --> <!-- @reported_with: --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/48960 --> **URL**: https://auto.hwupgrade.it/news/tecnologia/newron-ev-1-al-via-gli-ordini-per-la-moto-elettrica-con-carene-in-legno-e-pacco-batteria-cilindrico_87239.html **Browser / Version**: Firefox Mobile 71.0 **Operating System**: Android **Tested Another Browser**: Unknown **Problem type**: Something else **Description**: keeps reloading when scrolling down **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
auto.hwupgrade.it - see bug description - <!-- @browser: Firefox Mobile 71.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 --> <!-- @reported_with: --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/48960 --> **URL**: https://auto.hwupgrade.it/news/tecnologia/newron-ev-1-al-via-gli-ordini-per-la-moto-elettrica-con-carene-in-legno-e-pacco-batteria-cilindrico_87239.html **Browser / Version**: Firefox Mobile 71.0 **Operating System**: Android **Tested Another Browser**: Unknown **Problem type**: Something else **Description**: keeps reloading when scrolling down **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
auto hwupgrade it see bug description url browser version firefox mobile operating system android tested another browser unknown problem type something else description keeps reloading when scrolling down steps to reproduce browser configuration none from with ❤️
0
20,814
3,636,364,309
IssuesEvent
2016-02-12 02:43:21
coala-analyzer/coala
https://api.github.com/repos/coala-analyzer/coala
closed
Linter Bears
importance/high status/needs design type/feature
So, I've been looking at a lot of linter plugins in sublime, atom and also gnome-code-assistance. One thing they have in common is that they make a **Linter** class which is used to run multiple linters. The linter class basically runs an executable with some args and parses the output using a predefined regex pattern. So, the inputs to the LinterBear are `executable`, `args`, `regex` - that's it! With this, we can easily make any linter's bear in a matter of minutes. Some references: - `__init__.py` files in https://github.com/GNOME/gnome-code-assistance/tree/master/backends/ - atom's linter - https://atom.io/packages/linter - sublime-linter - http://www.sublimelinter.com/en/latest/ Linters: - [x] - JSON : using python's `json` module - [ ] - JSON Schema validation: https://people.collabora.com/~pwith/walbottle/ ? - [x] - Python - PyLint - [x] - Javascript - JSHint - [x] - C/CPP - cpplint, clang - [x] - PHP - using `php -l` - [x] - Ruby - using `ruby -wc` - [ ] - Ruby (Brakeman) - [ ] - Java - using `java -Xlint` - [x] - Java - checkstyle - [x] - CSS - csslint - [ ] - reStructuredText - rst - [ ] - Markdown - mdast-lint or markdownlint - [x] - Natural Language: Alex, proselint - [x] - C# - using mono's `mcs` - [x] - Dockerfile - dockerfile_lint - [ ] - Bash : shellcheck - [ ] - HTML : HTML-lint or HTML tidy - [ ] - CSV : - [ ] - TEX : chktex - [ ] - Matlab : Matlab's mlint - [ ] - R : lintr - [ ] - XML : xmllint - [ ] - XML schema validation - [ ] - YAML : yaml-js - [ ] - CoffeeScript - coffeelint - [ ] - Haskell : hlint - [ ] - Bootstrap - bootlint - [ ] - Gettext - Dennis - [ ] - Go : golint - [ ] - SQL - sqlint Auto fixers: - [ ] - Python : autopep8 - [ ] - JS : fixmyjs - [ ] - PHP : Php code sniffer
1.0
Linter Bears - So, I've been looking at a lot of linter plugins in sublime, atom and also gnome-code-assistance. One thing they have in common is that they make a **Linter** class which is used to run multiple linters. The linter class basically runs an executable with some args and parses the output using a predefined regex pattern. So, the inputs to the LinterBear are `executable`, `args`, `regex` - that's it! With this, we can easily make any linter's bear in a matter of minutes. Some references: - `__init__.py` files in https://github.com/GNOME/gnome-code-assistance/tree/master/backends/ - atom's linter - https://atom.io/packages/linter - sublime-linter - http://www.sublimelinter.com/en/latest/ Linters: - [x] - JSON : using python's `json` module - [ ] - JSON Schema validation: https://people.collabora.com/~pwith/walbottle/ ? - [x] - Python - PyLint - [x] - Javascript - JSHint - [x] - C/CPP - cpplint, clang - [x] - PHP - using `php -l` - [x] - Ruby - using `ruby -wc` - [ ] - Ruby (Brakeman) - [ ] - Java - using `java -Xlint` - [x] - Java - checkstyle - [x] - CSS - csslint - [ ] - reStructuredText - rst - [ ] - Markdown - mdast-lint or markdownlint - [x] - Natural Language: Alex, proselint - [x] - C# - using mono's `mcs` - [x] - Dockerfile - dockerfile_lint - [ ] - Bash : shellcheck - [ ] - HTML : HTML-lint or HTML tidy - [ ] - CSV : - [ ] - TEX : chktex - [ ] - Matlab : Matlab's mlint - [ ] - R : lintr - [ ] - XML : xmllint - [ ] - XML schema validation - [ ] - YAML : yaml-js - [ ] - CoffeeScript - coffeelint - [ ] - Haskell : hlint - [ ] - Bootstrap - bootlint - [ ] - Gettext - Dennis - [ ] - Go : golint - [ ] - SQL - sqlint Auto fixers: - [ ] - Python : autopep8 - [ ] - JS : fixmyjs - [ ] - PHP : Php code sniffer
non_process
linter bears so ive been looking at a lot of linter plugins in sublime atom and also gnome code assistance one thing they have in common is that they make a linter class which is used to run multiple linters the linter class basically runs an executable with some args and parses the output using a predefined regex pattern so the inputs to the linterbear are executable args regex thats it with this we can easily make any linter s bear in a matter of minutes some references init py files in atom s linter sublime linter linters json using python s json module json schema validation python pylint javascript jshint c cpp cpplint clang php using php l ruby using ruby wc ruby bakeman java using java xlint java checkstyle css csslint restructuredtext rst markdown mdast lint or markdownlint natural language alex proselint c using mono s mcs dockerfile dockerfile lint bash shellcheck html html lint or html tidy csv tex chktex matlab matlab s mlint r lintr xml xmllint xml schema validation yaml yaml js coffescript coffeelint haskell hlint bootstrap bootlint gettext dennis go golint sql sqlint auto fixers python js fixmyjs php php code sniffer
0
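The Linter-class pattern described in the coala "Linter Bears" record above — run an executable with some args and parse its output with a predefined regex — can be sketched in a few lines of Python. This is an illustrative sketch of the idea only; the class and field names here are hypothetical, not coala's actual API:

```python
import re
import subprocess
import sys


class RegexLinter:
    """Minimal linter wrapper: run an executable and regex-parse its stdout."""

    def __init__(self, executable, args, pattern):
        self.executable = executable
        self.args = list(args)
        # The pattern uses named groups so each match becomes a dict.
        self.pattern = re.compile(pattern)

    def lint(self, extra_args=()):
        proc = subprocess.run(
            [self.executable, *self.args, *extra_args],
            capture_output=True,
            text=True,
        )
        # Yield one dict per output line that matches the diagnostic pattern.
        for line in proc.stdout.splitlines():
            match = self.pattern.match(line)
            if match:
                yield match.groupdict()


if __name__ == "__main__":
    # Simulate a linter by having the Python interpreter itself print one
    # pycodestyle-style diagnostic line; a real bear would invoke pylint,
    # jshint, csslint, etc. with the appropriate pattern.
    fake = RegexLinter(
        sys.executable,
        ["-c", "print('foo.py:3:1: E302 expected 2 blank lines')"],
        r"(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+): (?P<message>.+)",
    )
    for issue in fake.lint():
        print(issue["file"], issue["line"], issue["message"])
```

With only `executable`, `args`, and `pattern` varying per tool, a wrapper like this is why the issue estimates that a new linter bear can be written "in a matter of minutes".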
9,211
12,240,085,937
IssuesEvent
2020-05-04 23:14:08
fluent/fluent-bit
https://api.github.com/repos/fluent/fluent-bit
closed
Config Maps
enhancement work-in-process
[disclaimer]: this is not related to Kubernetes Config Maps. Currently, we expose an API that plugins use to read expected configuration values that are coming either from the command line or a configuration file. This works as expected but there is a need to solve and improve in the following scenarios: - If a plugin gets an unknown configuration key, we must provide a mechanism to warn the user about this. From an operational perspective, we need to reduce the risk of bad configuration due to typos or deprecated property names. - Dynamic reload: the core is not aware of which are the expected configuration properties by each plugin, so this is not helpful for the further implementation of dynamic config reload. The core must be aware of all these details so it can perform validations before taking any reload action. If we aim to implement dynamic reload, this step is mandatory. ## Work in process I have a current POC extending configuration which implements Config Maps. A short preview: API header definition: ```C /* Configuration types */ #define FLB_CONFIG_MAP_STR 0 /* string */ #define FLB_CONFIG_MAP_NUM 1 /* number */ #define FLB_CONFIG_MAP_BOOL 2 /* boolean */ #define FLB_CONFIG_MAP_CLIST 3 /* comma separated list of strings */ #define FLB_CONFIG_MAP_SLIST 4 /* empty space separated list of strings */ typedef union { double num; /* FLB_CONFIG_MAP_NUM */ int boolean; /* FLB_CONFIG_MAP_BOOL */ flb_sds_t str; /* FLB_CONFIG_MAP_STR */ struct mk_list list; /* FLB_CONFIG_MAP_CLIST and FLB_CONFIG_MAP_SLIST */ } config_map_val; struct flb_config_map { int type; /* type */ flb_sds_t name; /* property name */ flb_sds_t def_value; /* default value */ uintptr_t offset; /* member offset */ flb_sds_t desc; /* description */ /* Fields used when generating a new registration in the heap */ config_map_val value; /* lookup value */ struct mk_list _head; /* */ }; ``` How a plugin aims to register its config map: ```C /* Configuration properties map */ static struct 
flb_config_map config_map[] = { { FLB_CONFIG_MAP_STR, "format", "msgpack", NULL, NULL }, { FLB_CONFIG_MAP_STR, "json_date_key", "date", offsetof(struct flb_stdout, json_date_key), NULL }, { FLB_CONFIG_MAP_NUM, "json_date_format", "0", offsetof(struct flb_stdout, json_date_format), NULL }, /* EOF */ {0, NULL, NULL, 0, NULL} }; /* Plugin registration */ struct flb_output_plugin out_stdout_plugin = { .name = "stdout", .description = "Prints events to STDOUT", .cb_init = cb_stdout_init, .cb_flush = cb_stdout_flush, .cb_exit = cb_stdout_exit, .flags = 0, .config_map = config_map }; ``` A direct registration can happen in the plugin context structure field, e.g.: ``` ret = flb_output_config_map_set(ins, (void *) ctx); ``` This work is about the implementation in the core; all plugins will be backward compatible since the old API is not deprecated. All plugins will be migrated.
1.0
Config Maps - [disclaimer]: this is not related to Kubernetes Config Maps. Currently, we expose an API that plugins use to read expected configuration values that are coming either from the command line or a configuration file. This works as expected but there is a need to solve and improve in the following scenarios: - If a plugin gets an unknown configuration key, we must provide a mechanism to warn the user about this. From an operational perspective, we need to reduce the risk of bad configuration due to typos or deprecated property names. - Dynamic reload: the core is not aware of which are the expected configuration properties by each plugin, so this is not helpful for the further implementation of dynamic config reload. The core must be aware of all these details so it can perform validations before taking any reload action. If we aim to implement dynamic reload, this step is mandatory. ## Work in process I have a current POC extending configuration which implements Config Maps. A short preview: API header definition: ```C /* Configuration types */ #define FLB_CONFIG_MAP_STR 0 /* string */ #define FLB_CONFIG_MAP_NUM 1 /* number */ #define FLB_CONFIG_MAP_BOOL 2 /* boolean */ #define FLB_CONFIG_MAP_CLIST 3 /* comma separated list of strings */ #define FLB_CONFIG_MAP_SLIST 4 /* empty space separated list of strings */ typedef union { double num; /* FLB_CONFIG_MAP_NUM */ int boolean; /* FLB_CONFIG_MAP_BOOL */ flb_sds_t str; /* FLB_CONFIG_MAP_STR */ struct mk_list list; /* FLB_CONFIG_MAP_CLIST and FLB_CONFIG_MAP_SLIST */ } config_map_val; struct flb_config_map { int type; /* type */ flb_sds_t name; /* property name */ flb_sds_t def_value; /* default value */ uintptr_t offset; /* member offset */ flb_sds_t desc; /* description */ /* Fields used when generating a new registration in the heap */ config_map_val value; /* lookup value */ struct mk_list _head; /* */ }; ``` How a plugin aims to register its config map: ```C /* Configuration properties map */ static struct 
flb_config_map config_map[] = { { FLB_CONFIG_MAP_STR, "format", "msgpack", NULL, NULL }, { FLB_CONFIG_MAP_STR, "json_date_key", "date", offsetof(struct flb_stdout, json_date_key), NULL }, { FLB_CONFIG_MAP_NUM, "json_date_format", "0", offsetof(struct flb_stdout, json_date_format), NULL }, /* EOF */ {0, NULL, NULL, 0, NULL} }; /* Plugin registration */ struct flb_output_plugin out_stdout_plugin = { .name = "stdout", .description = "Prints events to STDOUT", .cb_init = cb_stdout_init, .cb_flush = cb_stdout_flush, .cb_exit = cb_stdout_exit, .flags = 0, .config_map = config_map }; ``` A direct registration can happen in the plugin context structure field, e.g.: ``` ret = flb_output_config_map_set(ins, (void *) ctx); ``` This work is about the implementation in the core; all plugins will be backward compatible since the old API is not deprecated. All plugins will be migrated.
process
config maps this is not related to kubernetes config maps currently we expose an api that plugins use to read expected configuration values that are coming either from the command line or a configuration file this works as expected but there is a need to solve and improve in the following scenarios if a plugin gets an unknown configuration key we must provide a mechanism to warn the user about this from an operational perspective we need to reduce the risk of bad configuration due to typos or deprecated properties names dynamic reload the core is not aware of which are the expected configuration properties by each plugin so this is not helpful for the further implementation of dynamic config reload the core must be aware of all this details so it can perform validations before take any reload action if we aim to implement dynamic reload this step is mandatory work in process i have a current poc extending configuration which implements config maps a short preview api header definition c configuration types define flb config map str string define flb config map num number define flb config map bool boolean define flb config map clist comma separated list of strings define flb config map slist empty space separated list of strings typedef union double num flb config map num int boolean flb config map bool flb sds t str flb config map str struct mk list list flb config map clist and flb config map slist config map val struct flb config map int type type flb sds t name property name flb sds t def value default value uintptr t offset member offset flb sds t desc description fields used when generating a new registration in the heap config map val value lookup value struct mk list head how a plugin aims to register it config map c configuration properties map static struct flb config map config map flb config map str format msgpack null null flb config map str json date key date offsetof struct flb stdout json date key null flb config map num json date format offsetof 
struct flb stdout json date format null eof null null null plugin registration struct flb output plugin out stdout plugin name stdout description prints events to stdout cb init cb stdout init cb flush cb stdout flush cb exit cb stdout exit flags config map config map a direct registration can happen in the plugin context structure field e g ret flb output config map set ins void ctx this work is about the implementation in the core all plugins will be backward compatible since old api is not deprecated all plugins will be migrated
1
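The config-map mechanism proposed in the fluent-bit record above — each plugin declares the properties it accepts (name, type, default), and the core validates user configuration against that declaration instead of silently ignoring unknown keys — can be modeled with a small table. This is a simplified sketch of the idea in Python, not fluent-bit's real API; the property names mirror the stdout-plugin example in the issue:

```python
# Simplified model of the proposed config-map mechanism: a plugin declares
# its expected properties as name -> (type, default), analogous to the
# flb_config_map[] array in the issue.
CONFIG_MAP = {
    "format": (str, "msgpack"),
    "json_date_key": (str, "date"),
    "json_date_format": (int, 0),
}


def apply_config_map(config_map, user_config):
    """Build a validated plugin context dict from user-supplied config."""
    unknown = set(user_config) - set(config_map)
    if unknown:
        # This is the behaviour the issue asks for: surface typo'd or
        # deprecated property names instead of dropping them silently.
        raise KeyError(f"unknown configuration properties: {sorted(unknown)}")
    ctx = {}
    for name, (typ, default) in config_map.items():
        raw = user_config.get(name, default)
        ctx[name] = typ(raw)  # coerce string values from the config file
    return ctx
```

For example, `apply_config_map(CONFIG_MAP, {"json_date_key": "ts"})` fills the remaining properties from their defaults, while a typo such as `"json_datekey"` fails immediately — the validation step that would also let a core perform safety checks before a dynamic reload.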
155,349
24,450,424,439
IssuesEvent
2022-10-06 22:15:23
GSA-TTS/FAC
https://api.github.com/repos/GSA-TTS/FAC
opened
Conduct cognitive walkthrough workshop to evaluate current prototype
design
### Background Given the highly specialized nature of single audits and the FAC workflow, we'd like to enlist the help of SMEs to do a [cognitive walkthrough workshop](https://www.nngroup.com/articles/cognitive-walkthrough-workshop/) of the current prototype to help us catch usability and content issues that are very difficult for people without deep experience and domain specific subject matter expertise to see. ### Acceptance Criteria (We'll know we're done when...) - [ ] We've completed the cognitive walkthrough - [ ] Notes on insights and task success/fail from the cognitive walkthrough are documented in a place the relevant folks can access ### Tasks - [ ] Schedule 90 minute cog walkthrough workshop w/ SMEs - [ ] Stitch together end to end prototype - [ ] Write walkthrough facilitation script and action sequences - [ ] Conduct workshop - [ ] Document insights
1.0
Conduct cognitive walkthrough workshop to evaluate current prototype - ### Background Given the highly specialized nature of single audits and the FAC workflow, we'd like to enlist the help of SMEs to do a [cognitive walkthrough workshop](https://www.nngroup.com/articles/cognitive-walkthrough-workshop/) of the current prototype to help us catch usability and content issues that are very difficult for people without deep experience and domain specific subject matter expertise to see. ### Acceptance Criteria (We'll know we're done when...) - [ ] We've completed the cognitive walkthrough - [ ] Notes on insights and task success/fail from the cognitive walkthrough are documented in a place the relevant folks can access ### Tasks - [ ] Schedule 90 minute cog walkthrough workshop w/ SMEs - [ ] Stitch together end to end prototype - [ ] Write walkthrough facilitation script and action sequences - [ ] Conduct workshop - [ ] Document insights
non_process
conduct cognitive walkthrough workshop to evaluate current prototype background given the highly specialized nature of single audits and the fac workflow we d like to enlist the help of smes to do an of the current prototype to help us catch usability and content issues that are very difficult for people without deep experience and domain specific subject matter expertise to see acceptance criteria we ll know we re done when we ve completed the cognitive walkthrough notes on insights and task success fail from the cognitive walkthrough are documented in a place the relevant folks can access tasks schedule minute cog walkthrough workshop w smes stitch together end to end prototype write walkthrough facilitation script and action sequences conduct workshop document insights
0
285,648
31,155,017,690
IssuesEvent
2023-08-16 12:37:04
Trinadh465/linux-4.1.15_CVE-2018-5873
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2018-5873
opened
CVE-2017-5550 (Medium) detected in linuxlinux-4.1.52
Mend: dependency security vulnerability
## CVE-2017-5550 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/iov_iter.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Off-by-one error in the pipe_advance function in lib/iov_iter.c in the Linux kernel before 4.9.5 allows local users to obtain sensitive information from uninitialized heap-memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer-release decision. 
<p>Publish Date: 2017-02-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-5550>CVE-2017-5550</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-5550">https://nvd.nist.gov/vuln/detail/CVE-2017-5550</a></p> <p>Release Date: 2017-02-06</p> <p>Fix Resolution: 4.9.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-5550 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2017-5550 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/iov_iter.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Off-by-one error in the pipe_advance function in lib/iov_iter.c in the Linux kernel before 4.9.5 allows local users to obtain sensitive information from uninitialized heap-memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer-release decision. 
<p>Publish Date: 2017-02-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-5550>CVE-2017-5550</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-5550">https://nvd.nist.gov/vuln/detail/CVE-2017-5550</a></p> <p>Release Date: 2017-02-06</p> <p>Fix Resolution: 4.9.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files lib iov iter c vulnerability details off by one error in the pipe advance function in lib iov iter c in the linux kernel before allows local users to obtain sensitive information from uninitialized heap memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer release decision publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
22,667
31,896,016,070
IssuesEvent
2023-09-18 01:49:42
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
How to contribute is too hidden
task Docs - Quick Reference Guide Docs - Contributing non-normative Process - complete
The [link](https://github.com/tdwg/dwc) to the page on how to contribute is buried too deep in the Darwin Core body of web pages. It should be prominent on the most used page - the [Quick Reference Guide](https://dwc.tdwg.org/terms/) and it should be clear that it is an explanation of how to submit issues if one believes there is something missing or needs to be changed. The same should be done for the [Chronometric Age Extension Quick Reference Guide](https://tdwg.github.io/chrono/terms/).
1.0
How to contribute is too hidden - The [link](https://github.com/tdwg/dwc) to the page on how to contribute is buried too deep in the Darwin Core body of web pages. It should be prominent on the most used page - the [Quick Reference Guide](https://dwc.tdwg.org/terms/) and it should be clear that it is an explanation of how to submit issues if one believes there is something missing or needs to be changed. The same should be done for the [Chronometric Age Extension Quick Reference Guide](https://tdwg.github.io/chrono/terms/).
process
how to contribute is too hidden the to the page on how to contribute is buried too deep in the darwin core body of web pages it should be prominent on the most used page the and it should be clear that it is an explanation of how to submit issues if one believes there is something missing or needs to be changed the same should be done for the
1
25,656
4,164,694,476
IssuesEvent
2016-06-19 00:18:23
Truncatis-Life/Server_Files
https://api.github.com/repos/Truncatis-Life/Server_Files
closed
Disable Cellphone & Change Earplugs
bug Testing
Disable the Shift + Y cellphone Change the Earplugs hotkey to something like Shift + O
1.0
Disable Cellphone & Change Earplugs - Disable the Shift + Y cellphone Change the Earplugs hotkey to something like Shift + O
non_process
disable cellphone change earplugs disable the shift y cellphone change the earplugs hotkey to something like shift o
0
2,355
5,164,868,649
IssuesEvent
2017-01-17 11:54:55
sSarahlei/Watch-It
https://api.github.com/repos/sSarahlei/Watch-It
opened
convert html to pfd
in process
Todo: delete all unnecessary items, add suitable labels (with priotity and estimation), milestone, assignee, etc. ## Feature Template #### Related Issues/Tasks - [ ] #1 - [ ] #2 #### Scenario (test) - [ ] Step 1 - [ ] ... - [ ] Expected result ## User Story Template - As a ... - I want ... - So that ... ## Bug Template #### Expected behavior #### Actual behavior #### Steps to reproduce the behavior ## Project Submission Template #### Iteration page: [here](../wiki/iter-6.md) #### Checklist, e.g.,: - [ ] Feature scenarios/tests passing - [ ] Iteration page updated, including: - [ ] Iteration retrospective - [ ] Client review - [ ] Issues updates - [ ] Section on application of course materials - [ ] git tag - [ ] Next iteration: - [ ] Open page - [ ] Select stories and plan issues - [ ] Test scenarios - [ ] All engineers filled peer-review - [ ] Submitted - [ ] Announcement in chat room - [ ] Assign this issue to checker - [ ] Register for a review meeting
1.0
convert html to pfd - Todo: delete all unnecessary items, add suitable labels (with priotity and estimation), milestone, assignee, etc. ## Feature Template #### Related Issues/Tasks - [ ] #1 - [ ] #2 #### Scenario (test) - [ ] Step 1 - [ ] ... - [ ] Expected result ## User Story Template - As a ... - I want ... - So that ... ## Bug Template #### Expected behavior #### Actual behavior #### Steps to reproduce the behavior ## Project Submission Template #### Iteration page: [here](../wiki/iter-6.md) #### Checklist, e.g.,: - [ ] Feature scenarios/tests passing - [ ] Iteration page updated, including: - [ ] Iteration retrospective - [ ] Client review - [ ] Issues updates - [ ] Section on application of course materials - [ ] git tag - [ ] Next iteration: - [ ] Open page - [ ] Select stories and plan issues - [ ] Test scenarios - [ ] All engineers filled peer-review - [ ] Submitted - [ ] Announcement in chat room - [ ] Assign this issue to checker - [ ] Register for a review meeting
process
convert html to pfd todo delete all unnecessary items add suitable labels with priotity and estimation milestone assignee etc feature template related issues tasks scenario test step expected result user story template as a i want so that bug template expected behavior actual behavior steps to reproduce the behavior project submission template iteration page wiki iter md checklist e g feature scenarios tests passing iteration page updated including iteration retrospective client review issues updates section on application of course materials git tag next iteration open page select stories and plan issues test scenarios all engineers filled peer review submitted announcement in chat room assign this issue to checker register for a review meeting
1
11,166
13,957,694,356
IssuesEvent
2020-10-24 08:11:13
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
SE - Lantmäteriet: Harvesting request
Geoportal Harvesting process SE - Sweden
Hi! I understand that you are running a weekly harvest of the national nodes. So if it is running during the weekend (or today) for Sweden we don&#39;t need one extra run today. So If nothing is planned Friday-Sunday for the Swedish Node I would like to order a harvesting of the Swedish node today. Kind regards Bj&ouml;rn Olofsson, support for the Swedish Geoportal.
1.0
SE - Lantmäteriet: Harvesting request - Hi! I understand that you are running a weekly harvest of the national nodes. So if it is running during the weekend (or today) for Sweden we don&#39;t need one extra run today. So If nothing is planned Friday-Sunday for the Swedish Node I would like to order a harvesting of the Swedish node today. Kind regards Bj&ouml;rn Olofsson, support for the Swedish Geoportal.
process
se lantmäteriet harvesting request hi i understand that you are running a weekly harvest of the national nodes so if it is running during the weekend or today for sweden we don t need one extra run today so if nothing is planned friday sunday for the swedish node i would like to order a harvesting of the swedish node today kind regards bj ouml rn olofsson support for the swedish geoportal
1
17,482
23,299,703,167
IssuesEvent
2022-08-07 05:53:20
googleapis/python-texttospeech
https://api.github.com/repos/googleapis/python-texttospeech
closed
protobuf pinning to < 4.0.0dev
api: texttospeech type: process priority: p2
Is it possible to remove or relax the `protobuf` pinning? `google-cloud-texttospeech 2.12.0 has requirement protobuf<4.0.0dev,>=3.19.0, but you have protobuf 4.21.4.` Thanks
1.0
protobuf pinning to < 4.0.0dev - Is it possible to remove or relax the `protobuf` pinning? `google-cloud-texttospeech 2.12.0 has requirement protobuf<4.0.0dev,>=3.19.0, but you have protobuf 4.21.4.` Thanks
process
protobuf pinning to is it possible to remove or relax the protobuf pinning google cloud texttospeech has requirement protobuf but you have protobuf thanks
1
247,154
7,904,139,264
IssuesEvent
2018-07-02 02:19:04
magda-io/magda
https://api.github.com/repos/magda-io/magda
closed
"Ask a Question About a Dataset" modal doesn't work on small screens
bug priority: high
### Problem description If you tap the "Ask a question about a dataset" button, the resulting modal is too big to fit on my phone's screen (Iphone SE). Because you can't see the X button or the submit button there's no way to even close the modal, let alone send it. I think the modal needs scrollbars. ### Problem reproduction steps Go to any dataset page and tap "Ask a question about a dataset"
1.0
"Ask a Question About a Dataset" modal doesn't work on small screens - ### Problem description If you tap the "Ask a question about a dataset" button, the resulting modal is too big to fit on my phone's screen (Iphone SE). Because you can't see the X button or the submit button there's no way to even close the modal, let alone send it. I think the modal needs scrollbars. ### Problem reproduction steps Go to any dataset page and tap "Ask a question about a dataset"
non_process
ask a question about a dataset modal doesn t work on small screens problem description if you tap the ask a question about a dataset button the resulting modal is too big to fit on my phone s screen iphone se because you can t see the x button or the submit button there s no way to even close the modal let alone send it i think the modal needs scrollbars problem reproduction steps go to any dataset page and tap ask a question about a dataset
0
187,244
22,040,701,482
IssuesEvent
2022-05-29 10:00:44
PowerShell/PowerShell
https://api.github.com/repos/PowerShell/PowerShell
closed
Let each RunSpace have an IServiceProvider
Issue-Enhancement WG-Engine WG-Security Resolution-Declined
### Summary of the new feature / enhancement Using dependency injection in PowerShell is hard today. (I'm thinking about C# modules) I'm considering adding an IServiceProvider property to Runspace/SessionState (and InitialSessionState) to help with getting hold of an IServiceProvider in a way that is accessible from cmdlets, and also easy to inject when writing tests. Is this something that has already been considered? Worth a PR? ### Proposed technical implementation details (optional) _No response_
True
Let each RunSpace have an IServiceProvider - ### Summary of the new feature / enhancement Using dependency injection in PowerShell is hard today. (I'm thinking about C# modules) I'm considering adding an IServiceProvider property to Runspace/SessionState (and InitialSessionState) to help with getting hold of an IServiceProvider in a way that is accessible from cmdlets, and also easy to inject when writing tests. Is this something that has already been considered? Worth a PR? ### Proposed technical implementation details (optional) _No response_
non_process
let each runspace have an iserviceprovider summary of the new feature enhancement using dependency injection in powershell is hard today i m thinking about c modules i m considering adding an iserviceprovider property to runspace sessionstate and initialsessionstate to help with getting hold of an iserviceprovider in a way that is accessible from cmdlets and also easy to inject when writing tests is this something that has already been considered worth a pr proposed technical implementation details optional no response
0
20,172
26,727,761,473
IssuesEvent
2023-01-29 22:43:03
evidence-dev/evidence
https://api.github.com/repos/evidence-dev/evidence
opened
Extract layout into a standalone component, and load function
dev-process
Depends on #540 * Provide a clean layout data object that users can ingest into their own layout components * Pull our built in layout into a standalone component
1.0
Extract layout into a standalone component, and load function - Depends on #540 * Provide a clean layout data object that users can ingest into their own layout components * Pull our built in layout into a standalone component
process
extract layout into a standalone component and load function depends on provide a clean layout data object that users can ingest into their own layout components pull our built in layout into a standalone component
1
16,532
21,559,890,451
IssuesEvent
2022-05-01 02:13:09
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Hammerhead is not initialized in the rewrited iframe with same-domain url in IE
TYPE: bug AREA: client BROWSER: IE11 BROWSER: Edge FREQUENCY: level 1 SYSTEM: iframe processing STATE: Stale
related with https://github.com/DevExpress/testcafe-hammerhead/pull/1312
1.0
Hammerhead is not initialized in the rewrited iframe with same-domain url in IE - related with https://github.com/DevExpress/testcafe-hammerhead/pull/1312
process
hammerhead is not initialized in the rewrited iframe with same domain url in ie related with
1
130,497
18,159,506,015
IssuesEvent
2021-09-27 08:00:28
olympus-fergus/dvpwa
https://api.github.com/repos/olympus-fergus/dvpwa
opened
CVE-2020-11023 (Medium) detected in jquery-3.2.1.min.js
security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p> <p>Path to vulnerable library: /sqli/static/js/jquery-3.2.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.2.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/olympus-fergus/dvpwa/commit/82d4ff4d31fd67b726b0342e48d7cb37cfcce730">82d4ff4d31fd67b726b0342e48d7cb37cfcce730</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. 
<p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11023 (Medium) detected in jquery-3.2.1.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p> <p>Path to vulnerable library: /sqli/static/js/jquery-3.2.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.2.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/olympus-fergus/dvpwa/commit/82d4ff4d31fd67b726b0342e48d7cb37cfcce730">82d4ff4d31fd67b726b0342e48d7cb37cfcce730</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. 
<p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library sqli static js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with whitesource
0
7,754
10,866,862,296
IssuesEvent
2019-11-14 22:15:27
ECP-WarpX/WarpX
https://api.github.com/repos/ECP-WarpX/WarpX
closed
Copy particles in boosted-frame simulations on the GPU
gpu post-processing
GetParticleSlice() currently copies data that cross the z-slice on the CPU. Implemented a corresponding GPU copy.
1.0
Copy particles in boosted-frame simulations on the GPU - GetParticleSlice() currently copies data that cross the z-slice on the CPU. Implemented a corresponding GPU copy.
process
copy particles in boosted frame simulations on the gpu getparticleslice currently copies data that cross the z slice on the cpu implemented a corresponding gpu copy
1
8,591
11,758,536,906
IssuesEvent
2020-03-13 15:35:13
MHRA/products
https://api.github.com/repos/MHRA/products
closed
AUTOMATIC BATCH PROCESS - Delete service informs State Manager of completion / error
EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: TASK :rescue_worker_helmet:
### User want As a user I want to see up to date documents on the products website So I can make informed decisions **Customer acceptance criteria** **Technical acceptance criteria** Delete service calls sTATE mANAGER with the status of the delete. **Data acceptance criteria** **Testing acceptance criteria** **Size** S **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
AUTOMATIC BATCH PROCESS - Delete service informs State Manager of completion / error - ### User want As a user I want to see up to date documents on the products website So I can make informed decisions **Customer acceptance criteria** **Technical acceptance criteria** Delete service calls sTATE mANAGER with the status of the delete. **Data acceptance criteria** **Testing acceptance criteria** **Size** S **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
automatic batch process delete service informs state manager of completion error user want as a user i want to see up to date documents on the products website so i can make informed decisions customer acceptance criteria technical acceptance criteria delete service calls state manager with the status of the delete data acceptance criteria testing acceptance criteria size s value effort exit criteria met backlog discovery duxd development quality assurance release and validate
1
33,543
2,769,647,595
IssuesEvent
2015-05-01 04:03:46
mperrin/poppy
https://api.github.com/repos/mperrin/poppy
closed
avoid unnecessary array copies in wavefront.asFITS()
enhancement priority:low
The benchmarking I just did (https://github.com/mperrin/poppy/issues/23#issuecomment-94357101) turned up that we're wasting about 5% of the time in array copies in wavefront.asFITS() calls. I even noted that when I wrote the function: ``` # FIXME this is somewhat inefficient but easiest to code for now ``` 5% speedup is not much but we shouldn't leave it on the floor since it would be relatively easy to fix.
1.0
avoid unnecessary array copies in wavefront.asFITS() - The benchmarking I just did (https://github.com/mperrin/poppy/issues/23#issuecomment-94357101) turned up that we're wasting about 5% of the time in array copies in wavefront.asFITS() calls. I even noted that when I wrote the function: ``` # FIXME this is somewhat inefficient but easiest to code for now ``` 5% speedup is not much but we shouldn't leave it on the floor since it would be relatively easy to fix.
non_process
avoid unnecessary array copies in wavefront asfits the benchmarking i just did turned up that we re wasting about of the time in array copies in wavefront asfits calls i even noted that when i wrote the function fixme this is somewhat inefficient but easiest to code for now speedup is not much but we shouldn t leave it on the floor since it would be relatively easy to fix
0
67,774
9,099,646,431
IssuesEvent
2019-02-20 05:34:38
ampproject/amphtml
https://api.github.com/repos/ampproject/amphtml
closed
AMP project reference page fails AMP validation
Related to: Documentation Type: Bug good first issue
Go to page: https://www.ampproject.org/docs/reference/components/amp-addthis Expected: all AMP project pages are properly AMplified. Actual Result: the page fails AMP Validation.
1.0
AMP project reference page fails AMP validation - Go to page: https://www.ampproject.org/docs/reference/components/amp-addthis Expected: all AMP project pages are properly AMplified. Actual Result: the page fails AMP Validation.
non_process
amp project reference page fails amp validation go to page expected all amp project pages are properly amplified actual result the page fails amp validation
0
7,884
11,049,318,672
IssuesEvent
2019-12-09 23:20:49
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Updating VM exclusions
Pri1 automation/svc cxp process-automation/subsvc product-question resolved-by-customer triaged
is there a way to update the vm exclusions when its already deployed? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 225c9d05-83dd-b006-0025-3753f5ab25bf * Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096 * Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management) * Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
1.0
Updating VM exclusions - is there a way to update the vm exclusions when its already deployed? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 225c9d05-83dd-b006-0025-3753f5ab25bf * Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096 * Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management) * Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
process
updating vm exclusions is there a way to update the vm exclusions when its already deployed document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
1
13,171
22,208,968,188
IssuesEvent
2022-06-07 17:17:53
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Add tests to per manager known list of issues code
priority-3-normal type:refactor status:requirements
### Describe the proposed change(s). We merged code that generates a list of GitHub issues that are labeled `type:bug` or `type:feature`. But this code does not come with any tests. We should test if the code is correct. :wink: https://github.com/renovatebot/renovate/pull/15791#discussion_r890935334
1.0
Add tests to per manager known list of issues code - ### Describe the proposed change(s). We merged code that generates a list of GitHub issues that are labeled `type:bug` or `type:feature`. But this code does not come with any tests. We should test if the code is correct. :wink: https://github.com/renovatebot/renovate/pull/15791#discussion_r890935334
non_process
add tests to per manager known list of issues code describe the proposed change s we merged code that generates a list of github issues that are labeled type bug or type feature but this code does not come with any tests we should test if the code is correct wink
0
12,312
7,800,966,281
IssuesEvent
2018-06-09 15:34:59
gama-platform/gama
https://api.github.com/repos/gama-platform/gama
closed
Revert to original values is gray does nothing if manual changes are made
> Bug Affects Usability Concerns Interface OS OSX
### Steps to reproduce 1. Open any model that has parameters. I tried regression.gaml in the library. 2. change x or y values that have input boxes, I did not check with dropdown or slider. 3. then try change values by manual entering a different value and not by + or - ### Expected behavior If I press revert to original, original values should be restored. Basically the revert button is gray. It is active only when used with + or - ### Actual behavior revert button is gray and therefore does not detect manual changes. **The same applies to the large red revert button for whole model.** ### System and version MacOS Sierra, GAMA standalone of 5 march 2018
True
Revert to original values is gray does nothing if manual changes are made - ### Steps to reproduce 1. Open any model that has parameters. I tried regression.gaml in the library. 2. change x or y values that have input boxes, I did not check with dropdown or slider. 3. then try change values by manual entering a different value and not by + or - ### Expected behavior If I press revert to original, original values should be restored. Basically the revert button is gray. It is active only when used with + or - ### Actual behavior revert button is gray and therefore does not detect manual changes. **The same applies to the large red revert button for whole model.** ### System and version MacOS Sierra, GAMA standalone of 5 march 2018
non_process
revert to original values is gray does nothing if manual changes are made steps to reproduce open any model that has parameters i tried regression gaml in the library change x or y values that have input boxes i did not check with dropdown or slider then try change values by manual entering a different value and not by or expected behavior if i press revert to original original values should be restored basically the revert button is gray it is active only when used with or actual behavior revert button is gray and therefore does not detect manual changes the same applies to the large red revert button for whole model system and version macos sierra gama standalone of march
0
250
2,671,341,287
IssuesEvent
2015-03-24 05:16:01
GsDevKit/gsUpgrader
https://api.github.com/repos/GsDevKit/gsUpgrader
closed
gsUpgrader does not honor locked GLASS1 project on install ...
in process
The following loads glass from GitHub, despite the lock: ```Smalltalk GsDeployer bulkMigrate: [ Metacello new baseline: 'GLASS1'; repository: 'filetree://${gitRoot}/glass/repository'; lock ]. Gofer new package: 'GsUpgrader-Core'; url: 'http://ss3.gemtalksystems.com/ss/gsUpgrader'; load. (Smalltalk at: #'GsUpgrader') upgradeGLASS1 ```
1.0
gsUpgrader does not honor locked GLASS1 project on install ... - The following loads glass from GitHub, despite the lock: ```Smalltalk GsDeployer bulkMigrate: [ Metacello new baseline: 'GLASS1'; repository: 'filetree://${gitRoot}/glass/repository'; lock ]. Gofer new package: 'GsUpgrader-Core'; url: 'http://ss3.gemtalksystems.com/ss/gsUpgrader'; load. (Smalltalk at: #'GsUpgrader') upgradeGLASS1 ```
process
gsupgrader does not honor locked project on install the following loads glass from github despite the lock smalltalk gsdeployer bulkmigrate metacello new baseline repository filetree gitroot glass repository lock gofer new package gsupgrader core url load smalltalk at gsupgrader
1
53,052
6,291,056,286
IssuesEvent
2017-07-19 23:17:59
metabase/metabase
https://api.github.com/repos/metabase/metabase
opened
CI can pass when two nodes fail at the same time
CI & Tests
CI can potentially pass when it should have failed if two nodes "fail fast" at the same time. Here's an example: https://circleci.com/gh/metabase/metabase/12597#tests/containers/5 In the link I posted nodes 4 and 5 both failed and got to the “stopping other nodes…” part, but then both exited with 0. Maybe they both stopped each other (race condition)? We need to make sure only one fail-fast signal happens per CI run.
1.0
CI can pass when two nodes fail at the same time - CI can potentially pass when it should have failed if two nodes "fail fast" at the same time. Here's an example: https://circleci.com/gh/metabase/metabase/12597#tests/containers/5 In the link I posted nodes 4 and 5 both failed and got to the “stopping other nodes…” part, but then both exited with 0. Maybe they both stopped each other (race condition)? We need to make sure only one fail-fast signal happens per CI run.
non_process
ci can pass when two nodes fail at the same time ci can potentially pass when it should have failed if two nodes fail fast at the same time here s an example in the link i posted nodes and both failed and got to the “stopping other nodes…” part but then both exited with maybe they both stopped each other race condition we need to make sure only one fail fast signal happens per ci run
0
283,849
24,566,387,732
IssuesEvent
2022-10-13 03:42:41
bitcoin/bitcoin
https://api.github.com/repos/bitcoin/bitcoin
closed
ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) in BaseIndex
Bug Tests
https://cirrus-ci.com/task/6564394053140480?logs=ci#L3875: ```bash WARNING: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) (pid=24158) Write of size 8 at 0x7ffe0efae9f8 by main thread: #0 BaseIndex::~BaseIndex() src/index/base.cpp:53:1 (test_bitcoin+0xcc6b69) #1 CoinStatsIndex::~CoinStatsIndex() src/./index/coinstatsindex.h:17:7 (test_bitcoin+0x3b9b21) #2 coinstatsindex_tests::coinstatsindex_initial_sync::test_method() src/test/coinstatsindex_tests.cpp:84:1 (test_bitcoin+0x3b9b21) #3 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b814b) #4 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d) #5 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877) #6 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877) #7 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877) #8 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e) #9 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e) #10 
boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e) #11 boost::execution_monitor::execute(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0) #12 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b) #13 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b) #14 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63) #15 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #16 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #17 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66) #18 boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) 
/tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6) #19 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6) Previous read of size 8 at 0x7ffe0efae9f8 by thread T1 (mutexes: write M603): #0 BaseIndex::SetBestBlockIndex(CBlockIndex const*)::$_1::operator()() const src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6) #1 BaseIndex::SetBestBlockIndex(CBlockIndex const*) src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6) #2 BaseIndex::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*) src/index/base.cpp:273:9 (test_bitcoin+0xcc9759) #3 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)::operator()(CValidationInterface&) const src/validationinterface.cpp:225:79 (test_bitcoin+0x10223a4) #4 void MainSignalsImpl::Iterate<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)&&) src/validationinterface.cpp:86:17 (test_bitcoin+0x10223a4) #5 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const src/validationinterface.cpp:225:22 (test_bitcoin+0x10223a4) #6 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9::operator()() const src/validationinterface.cpp:227:5 (test_bitcoin+0x10223a4) #7 decltype(static_cast<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(fp)()) std::__1::__invoke<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex 
const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10223a4) #8 void std::__1::__invoke_void_return_wrapper<void, true>::__call<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10223a4) #9 std::__1::__function::__alloc_func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10223a4) #10 std::__1::__function::__func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10223a4) #11 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b6b71) #12 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b6b71) #13 SingleThreadedSchedulerClient::ProcessQueue() src/scheduler.cpp:175:5 (test_bitcoin+0x10b6b71) #14 SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1::operator()() const src/scheduler.cpp:144:41 (test_bitcoin+0x10b8875) #15 decltype(static_cast<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(fp)()) 
std::__1::__invoke<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10b8875) #16 void std::__1::__invoke_void_return_wrapper<void, true>::__call<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10b8875) #17 std::__1::__function::__alloc_func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10b8875) #18 std::__1::__function::__func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10b8875) #19 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b5b5c) #20 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b5b5c) #21 CScheduler::serviceQueue() src/scheduler.cpp:62:17 (test_bitcoin+0x10b5b5c) #22 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0::operator()() const src/test/util/setup_common.cpp:160:110 (test_bitcoin+0xa4e7b8) #23 decltype(static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char 
const*, std::__1::allocator<char const*> > const&)::$_0&>(fp)()) std::__1::__invoke<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e7b8) #24 void std::__1::__invoke_void_return_wrapper<void, true>::__call<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0xa4e7b8) #25 std::__1::__function::__alloc_func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0xa4e7b8) #26 std::__1::__function::__func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > 
const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0xa4e7b8) #27 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x115760f) #28 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x115760f) #29 util::TraceThread(char const*, std::__1::function<void ()>) src/util/thread.cpp:18:9 (test_bitcoin+0x115760f) #30 decltype(static_cast<void (*>(fp)(static_cast<char const*>(fp0), static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(fp0))) std::__1::__invoke<void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(void (*&&)(char const*, std::__1::function<void ()>), char const*&&, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e3b1) #31 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, 
ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, 2ul, 3ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>&, std::__1::__tuple_indices<2ul, 3ul>) /usr/lib/llvm-13/bin/../include/c++/v1/thread:280:5 (test_bitcoin+0xa4e3b1) #32 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0> >(void*) /usr/lib/llvm-13/bin/../include/c++/v1/thread:291:5 (test_bitcoin+0xa4e3b1) Location is stack of main thread. Location is global '??' 
at 0x7ffe0ef91000 ([stack]+0x00000001d9f8) Mutex M603 (0x558df2c934a0) created at: #0 pthread_mutex_init <null> (test_bitcoin+0x11cf6f) #1 std::__1::recursive_mutex::recursive_mutex() <null> (libc++.so.1+0x49fb3) #2 __libc_start_main <null> (libc.so.6+0x29eba) Thread T1 'b-scheduler' (tid=24216, running) created by main thread at: #0 pthread_create <null> (test_bitcoin+0x11b7fd) #1 std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-13/bin/../include/c++/v1/__threading_support:443:10 (test_bitcoin+0xa47a76) #2 std::__1::thread::thread<void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, void>(void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/thread:307:16 (test_bitcoin+0xa47a76) #3 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:160:42 (test_bitcoin+0xa47a76) #4 TestingSetup::TestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:198:7 (test_bitcoin+0xa47ed9) #5 TestChain100Setup::TestChain100Setup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:246:7 
(test_bitcoin+0xa48be3) #6 coinstatsindex_tests::coinstatsindex_initial_sync::coinstatsindex_initial_sync() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b) #7 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b) #8 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d) #9 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877) #10 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877) #11 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877) #12 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e) #13 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e) #14 boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e) #15 boost::execution_monitor::execute(boost::function<int ()> const&) 
/tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0) #16 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b) #17 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b) #18 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63) #19 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #20 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #21 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66) #22 boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6) #23 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6) SUMMARY: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) 
src/index/base.cpp:53:1 in BaseIndex::~BaseIndex() ================== Exit status: 2 ```
1.0
ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) in BaseIndex - https://cirrus-ci.com/task/6564394053140480?logs=ci#L3875: ```bash WARNING: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) (pid=24158) Write of size 8 at 0x7ffe0efae9f8 by main thread: #0 BaseIndex::~BaseIndex() src/index/base.cpp:53:1 (test_bitcoin+0xcc6b69) #1 CoinStatsIndex::~CoinStatsIndex() src/./index/coinstatsindex.h:17:7 (test_bitcoin+0x3b9b21) #2 coinstatsindex_tests::coinstatsindex_initial_sync::test_method() src/test/coinstatsindex_tests.cpp:84:1 (test_bitcoin+0x3b9b21) #3 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b814b) #4 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d) #5 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877) #6 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877) #7 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877) #8 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e) #9 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) 
/tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e) #10 boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e) #11 boost::execution_monitor::execute(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0) #12 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b) #13 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b) #14 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63) #15 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #16 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #17 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66) #18 
boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6) #19 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6) Previous read of size 8 at 0x7ffe0efae9f8 by thread T1 (mutexes: write M603): #0 BaseIndex::SetBestBlockIndex(CBlockIndex const*)::$_1::operator()() const src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6) #1 BaseIndex::SetBestBlockIndex(CBlockIndex const*) src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6) #2 BaseIndex::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*) src/index/base.cpp:273:9 (test_bitcoin+0xcc9759) #3 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)::operator()(CValidationInterface&) const src/validationinterface.cpp:225:79 (test_bitcoin+0x10223a4) #4 void MainSignalsImpl::Iterate<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)&&) src/validationinterface.cpp:86:17 (test_bitcoin+0x10223a4) #5 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const src/validationinterface.cpp:225:22 (test_bitcoin+0x10223a4) #6 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9::operator()() const src/validationinterface.cpp:227:5 (test_bitcoin+0x10223a4) #7 decltype(static_cast<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(fp)()) 
std::__1::__invoke<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10223a4) #8 void std::__1::__invoke_void_return_wrapper<void, true>::__call<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10223a4) #9 std::__1::__function::__alloc_func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10223a4) #10 std::__1::__function::__func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10223a4) #11 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b6b71) #12 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b6b71) #13 SingleThreadedSchedulerClient::ProcessQueue() src/scheduler.cpp:175:5 (test_bitcoin+0x10b6b71) #14 SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1::operator()() const src/scheduler.cpp:144:41 (test_bitcoin+0x10b8875) #15 
decltype(static_cast<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(fp)()) std::__1::__invoke<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10b8875) #16 void std::__1::__invoke_void_return_wrapper<void, true>::__call<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10b8875) #17 std::__1::__function::__alloc_func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10b8875) #18 std::__1::__function::__func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10b8875) #19 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b5b5c) #20 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b5b5c) #21 CScheduler::serviceQueue() src/scheduler.cpp:62:17 (test_bitcoin+0x10b5b5c) #22 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0::operator()() const src/test/util/setup_common.cpp:160:110 (test_bitcoin+0xa4e7b8) #23 
decltype(static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(fp)()) std::__1::__invoke<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e7b8) #24 void std::__1::__invoke_void_return_wrapper<void, true>::__call<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0xa4e7b8) #25 std::__1::__function::__alloc_func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0xa4e7b8) #26 
std::__1::__function::__func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0xa4e7b8) #27 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x115760f) #28 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x115760f) #29 util::TraceThread(char const*, std::__1::function<void ()>) src/util/thread.cpp:18:9 (test_bitcoin+0x115760f) #30 decltype(static_cast<void (*>(fp)(static_cast<char const*>(fp0), static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(fp0))) std::__1::__invoke<void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(void (*&&)(char const*, std::__1::function<void ()>), char const*&&, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e3b1) #31 void 
std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, 2ul, 3ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>&, std::__1::__tuple_indices<2ul, 3ul>) /usr/lib/llvm-13/bin/../include/c++/v1/thread:280:5 (test_bitcoin+0xa4e3b1) #32 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0> >(void*) /usr/lib/llvm-13/bin/../include/c++/v1/thread:291:5 (test_bitcoin+0xa4e3b1) Location is stack of main thread. Location is global '??' 
at 0x7ffe0ef91000 ([stack]+0x00000001d9f8) Mutex M603 (0x558df2c934a0) created at: #0 pthread_mutex_init <null> (test_bitcoin+0x11cf6f) #1 std::__1::recursive_mutex::recursive_mutex() <null> (libc++.so.1+0x49fb3) #2 __libc_start_main <null> (libc.so.6+0x29eba) Thread T1 'b-scheduler' (tid=24216, running) created by main thread at: #0 pthread_create <null> (test_bitcoin+0x11b7fd) #1 std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-13/bin/../include/c++/v1/__threading_support:443:10 (test_bitcoin+0xa47a76) #2 std::__1::thread::thread<void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, void>(void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/thread:307:16 (test_bitcoin+0xa47a76) #3 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:160:42 (test_bitcoin+0xa47a76) #4 TestingSetup::TestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:198:7 (test_bitcoin+0xa47ed9) #5 TestChain100Setup::TestChain100Setup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:246:7 
(test_bitcoin+0xa48be3) #6 coinstatsindex_tests::coinstatsindex_initial_sync::coinstatsindex_initial_sync() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b) #7 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b) #8 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d) #9 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877) #10 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877) #11 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877) #12 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e) #13 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e) #14 boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e) #15 boost::execution_monitor::execute(boost::function<int ()> const&) 
/tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0) #16 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b) #17 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b) #18 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63) #19 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #20 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8) #21 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66) #22 boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6) #23 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6) SUMMARY: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) 
src/index/base.cpp:53:1 in BaseIndex::~BaseIndex() ================== Exit status: 2 ```
non_process
threadsanitizer data race on vptr ctor dtor vs virtual call in baseindex bash warning threadsanitizer data race on vptr ctor dtor vs virtual call pid write of size at by main thread baseindex baseindex src index base cpp test bitcoin coinstatsindex coinstatsindex src index coinstatsindex h test bitcoin coinstatsindex tests coinstatsindex initial sync test method src test coinstatsindex tests cpp test bitcoin coinstatsindex tests coinstatsindex initial sync invoker src test coinstatsindex tests cpp test bitcoin boost detail function void function invoke boost detail function function buffer tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost operator const tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost detail forward operator tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost detail function function obj invoke boost detail function function buffer tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost operator const tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin int boost detail do invoke boost function boost shared ptr const boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor catch signals boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor execute boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor vexecute boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost unit test unit test monitor t execute and translate boost function const unsigned long tmp 
cirrus ci build depends pc linux gnu include boost test impl unit test monitor ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework run unsigned long bool tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test unit test main boost unit test test suite int char int char tmp cirrus ci build depends pc linux gnu include boost test impl unit test main ipp test bitcoin main tmp cirrus ci build depends pc linux gnu include boost test impl unit test main ipp test bitcoin previous read of size at by thread mutexes write baseindex setbestblockindex cblockindex const operator const src index base cpp test bitcoin baseindex setbestblockindex cblockindex const src index base cpp test bitcoin baseindex blockconnected std shared ptr const cblockindex const src index base cpp test bitcoin cmainsignals blockconnected std shared ptr const cblockindex const operator const lambda cvalidationinterface operator cvalidationinterface const src validationinterface cpp test bitcoin void mainsignalsimpl iterate const cblockindex const operator const lambda cvalidationinterface cmainsignals blockconnected std shared ptr const cblockindex const operator const lambda cvalidationinterface src validationinterface cpp test bitcoin cmainsignals blockconnected std shared 
ptr const cblockindex const operator const src validationinterface cpp test bitcoin cmainsignals blockconnected std shared ptr const cblockindex const operator const src validationinterface cpp test bitcoin decltype static cast const cblockindex const fp std invoke const cblockindex const cmainsignals blockconnected std shared ptr const cblockindex const usr lib llvm bin include c type traits test bitcoin void std invoke void return wrapper call const cblockindex const cmainsignals blockconnected std shared ptr const cblockindex const usr lib llvm bin include c functional invoke h test bitcoin std function alloc func const cblockindex const std allocator const cblockindex const void operator usr lib llvm bin include c functional function h test bitcoin std function func const cblockindex const std allocator const cblockindex const void operator usr lib llvm bin include c functional function h test bitcoin std function value func operator const usr lib llvm bin include c functional function h test bitcoin std function operator const usr lib llvm bin include c functional function h test bitcoin singlethreadedschedulerclient processqueue src scheduler cpp test bitcoin singlethreadedschedulerclient maybescheduleprocessqueue operator const src scheduler cpp test bitcoin decltype static cast fp std invoke singlethreadedschedulerclient maybescheduleprocessqueue usr lib llvm bin include c type traits test bitcoin void std invoke void return wrapper call singlethreadedschedulerclient maybescheduleprocessqueue usr lib llvm bin include c functional invoke h test bitcoin std function alloc func void operator usr lib llvm bin include c functional function h test bitcoin std function func void operator usr lib llvm bin include c functional function h test bitcoin std function value func operator const usr lib llvm bin include c functional function h test bitcoin std function operator const usr lib llvm bin include c functional function h test bitcoin cscheduler servicequeue src 
scheduler cpp test bitcoin chaintestingsetup chaintestingsetup std basic string std allocator const std vector const operator const src test util setup common cpp test bitcoin decltype static cast std allocator const std vector const fp std invoke std allocator const std vector const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const usr lib llvm bin include c type traits test bitcoin void std invoke void return wrapper call std allocator const std vector const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const usr lib llvm bin include c functional invoke h test bitcoin std function alloc func std allocator const std vector const std allocator std allocator const std vector const void operator usr lib llvm bin include c functional function h test bitcoin std function func std allocator const std vector const std allocator std allocator const std vector const void operator usr lib llvm bin include c functional function h test bitcoin std function value func operator const usr lib llvm bin include c functional function h test bitcoin std function operator const usr lib llvm bin include c functional function h test bitcoin util tracethread char const std function src util thread cpp test bitcoin decltype static cast fp static cast static cast std allocator const std vector const std invoke char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const void char const std function char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const usr lib llvm bin include c type traits test bitcoin void std thread execute void char const std function char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const std tuple void char const std function char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const std tuple indices usr lib llvm bin 
include c thread test bitcoin void std thread proxy void char const std function char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const void usr lib llvm bin include c thread test bitcoin location is stack of main thread location is global at mutex created at pthread mutex init test bitcoin std recursive mutex recursive mutex libc so libc start main libc so thread b scheduler tid running created by main thread at pthread create test bitcoin std libcpp thread create unsigned long void void void usr lib llvm bin include c threading support test bitcoin std thread thread char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const void void char const std function char const chaintestingsetup chaintestingsetup std basic string std allocator const std vector const usr lib llvm bin include c thread test bitcoin chaintestingsetup chaintestingsetup std basic string std allocator const std vector const src test util setup common cpp test bitcoin testingsetup testingsetup std basic string std allocator const std vector const src test util setup common cpp test bitcoin std basic string std allocator const std vector const src test util setup common cpp test bitcoin coinstatsindex tests coinstatsindex initial sync coinstatsindex initial sync src test coinstatsindex tests cpp test bitcoin coinstatsindex tests coinstatsindex initial sync invoker src test coinstatsindex tests cpp test bitcoin boost detail function void function invoke boost detail function function buffer tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost operator const tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost detail forward operator tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost detail function function obj invoke boost detail function function buffer 
tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin boost operator const tmp cirrus ci build depends pc linux gnu include boost function function template hpp test bitcoin int boost detail do invoke boost function boost shared ptr const boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor catch signals boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor execute boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost execution monitor vexecute boost function const tmp cirrus ci build depends pc linux gnu include boost test impl execution monitor ipp test bitcoin boost unit test unit test monitor t execute and translate boost function const unsigned long tmp cirrus ci build depends pc linux gnu include boost test impl unit test monitor ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework state execute test tree unsigned long unsigned long boost unit test framework state random generator helper const tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test framework run unsigned long bool tmp cirrus ci build depends pc linux gnu include boost test impl framework ipp test bitcoin boost unit test unit test main boost unit test test suite int char int char tmp cirrus ci build 
depends pc linux gnu include boost test impl unit test main ipp test bitcoin main tmp cirrus ci build depends pc linux gnu include boost test impl unit test main ipp test bitcoin summary threadsanitizer data race on vptr ctor dtor vs virtual call src index base cpp in baseindex baseindex exit status
0
34,474
14,406,997,111
IssuesEvent
2020-12-03 21:06:58
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
Research flatten type as a possible solution to making alert params searchable
Feature:Alerting Team:Alerting Services
We should do an investigation to see if https://www.elastic.co/guide/en/elasticsearch/reference/7.x/flattened.html can be used as a solution to make our alert params searchable, sortable and filterable (https://github.com/elastic/kibana/issues/50213).
1.0
Research flatten type as a possible solution to making alert params searchable - We should do an investigation to see if https://www.elastic.co/guide/en/elasticsearch/reference/7.x/flattened.html can be used as a solution to make our alert params searchable, sortable and filterable (https://github.com/elastic/kibana/issues/50213).
non_process
research flatten type as a possible solution to making alert params searchable we should do an investigation to see if can be used as a solution to make our alert params searchable sortable and filterable
0
21,898
30,345,555,632
IssuesEvent
2023-07-11 15:12:31
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Release Checklist 0.83
enhancement process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.83.0) - [x] GitHub checks for branch are passing - [x] No pre-release or snapshot dependencies present in build files - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Manual Submission for GCP Marketplace verification by google - [x] Publish marketplace release - [x] Publish release ## Performance - [x] Deployed - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests ## Previewnet - [x] Deployed ## Staging - [x] Deployed ## Testnet - [x] Deployed ## Mainnet - [x] Deployed to public - [x] Deployed to private ### Alternatives _No response_
1.0
Release Checklist 0.83 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.83.0) - [x] GitHub checks for branch are passing - [x] No pre-release or snapshot dependencies present in build files - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Manual Submission for GCP Marketplace verification by google - [x] Publish marketplace release - [x] Publish release ## Performance - [x] Deployed - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests ## Previewnet - [x] Deployed ## Staging - [x] Deployed ## Testnet - [x] Deployed ## Mainnet - [x] Deployed to public - [x] Deployed to private ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing no pre release or snapshot dependencies present in build files automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deployed grpc api performance tests importer performance tests rest api performance tests previewnet deployed staging deployed testnet deployed mainnet deployed to public deployed to private alternatives no response
1
4,641
7,482,406,902
IssuesEvent
2018-04-05 01:10:55
UnbFeelings/unb-feelings-GQA
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
closed
Renomear o artefato "Resultado Padronizado"
process wiki
Na atividade 3.3 do processo, chamada Registrar resultados, existe uma saída chamada "Resultado Padronizado", o que não deixa muito claro o que é, sugeriu-se trocar para "Resultado Consolidado" ** Deve ser atualizado tanto no diagrama quanto na descrição da atividade do processo
1.0
Renomear o artefato "Resultado Padronizado" - Na atividade 3.3 do processo, chamada Registrar resultados, existe uma saída chamada "Resultado Padronizado", o que não deixa muito claro o que é, sugeriu-se trocar para "Resultado Consolidado" ** Deve ser atualizado tanto no diagrama quanto na descrição da atividade do processo
process
renomear o artefato resultado padronizado na atividade do processo chamada registrar resultados existe uma saída chamada resultado padronizado o que não deixa muito claro o que é sugeriu se trocar para resultado consolidado deve ser atualizado tanto no diagrama quanto na descrição da atividade do processo
1
71,868
15,209,863,605
IssuesEvent
2021-02-17 06:18:29
YJSoft/syntaxhighlighter
https://api.github.com/repos/YJSoft/syntaxhighlighter
opened
CVE-2020-7656 (Medium) detected in jquery-1.3.2.min.js
security vulnerability
## CVE-2020-7656 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.3.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js</a></p> <p>Path to dependency file: syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/test.html</p> <p>Path to vulnerable library: syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/vendor/jquery.js,syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/vendor/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.3.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/YJSoft/syntaxhighlighter/commit/7161194500204a098f69d41f3418e91d5ff7cbb7">7161194500204a098f69d41f3418e91d5ff7cbb7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed. 
<p>Publish Date: 2020-05-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p> <p>Release Date: 2020-05-19</p> <p>Fix Resolution: jquery-rails - 2.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7656 (Medium) detected in jquery-1.3.2.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.3.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js</a></p> <p>Path to dependency file: syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/test.html</p> <p>Path to vulnerable library: syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/vendor/jquery.js,syntaxhighlighter/node_modules/grunt-phplint/node_modules/underscore.string/test/test_underscore/vendor/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.3.2.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/YJSoft/syntaxhighlighter/commit/7161194500204a098f69d41f3418e91d5ff7cbb7">7161194500204a098f69d41f3418e91d5ff7cbb7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed. 
<p>Publish Date: 2020-05-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p> <p>Release Date: 2020-05-19</p> <p>Fix Resolution: jquery-rails - 2.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file syntaxhighlighter node modules grunt phplint node modules underscore string test test underscore test html path to vulnerable library syntaxhighlighter node modules grunt phplint node modules underscore string test test underscore vendor jquery js syntaxhighlighter node modules grunt phplint node modules underscore string test test underscore vendor jquery js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery rails step up your open source security game with whitesource
0
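The jQuery CVE record above describes a script-stripping check that misses closing tags containing whitespace. A minimal Python sketch of that vulnerability class (an illustration only, not jQuery's actual source; the pattern names are invented):

```python
import re

# Naive sanitizer of the kind the CVE describes: the closing-tag pattern
# allows no whitespace, so "</script >" slips through unmatched.
NAIVE_SCRIPT_RE = re.compile(r"<script\b[^>]*>.*?</script>", re.IGNORECASE | re.DOTALL)

# Hardened variant: tolerate whitespace before the closing '>'.
FIXED_SCRIPT_RE = re.compile(r"<script\b[^>]*>.*?</script\s*>", re.IGNORECASE | re.DOTALL)

def strip_scripts(html: str, pattern: re.Pattern = NAIVE_SCRIPT_RE) -> str:
    """Remove script blocks matched by `pattern` from an HTML fragment."""
    return pattern.sub("", html)
```

With the payload from the record, `strip_scripts('<div>ok</div><script>alert(1)</script >')` leaves the script intact under the naive pattern and removes it under the hardened one.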
1,251
3,786,400,549
IssuesEvent
2016-03-21 02:19:36
ichiefboz/krlx-applications
https://api.github.com/repos/ichiefboz/krlx-applications
opened
Add "All Shows" page for board members
Processing
Board members should have access to an "All Shows" screen, and see what's been submitted. Additionally, administrative functions will be added. Through this page, the following things should be made possible: - [ ] Download a CSV file of all applications - [ ] Manually edit any aspect of an application - [ ] Assign a time to a show - [ ] Cancel an application Only the downloading of CSV is required for v1.0; additional features can be added in future releases.
1.0
Add "All Shows" page for board members - Board members should have access to an "All Shows" screen, and see what's been submitted. Additionally, administrative functions will be added. Through this page, the following things should be made possible: - [ ] Download a CSV file of all applications - [ ] Manually edit any aspect of an application - [ ] Assign a time to a show - [ ] Cancel an application Only the downloading of CSV is required for v1.0; additional features can be added in future releases.
process
add all shows page for board members board members should have access to an all shows screen and see what s been submitted additionally administrative functions will be added through this page the following things should be made possible download a csv file of all applications manually edit any aspect of an application assign a time to a show cancel an application only the downloading of csv is required for additional features can be added in future releases
1
7,372
10,512,647,135
IssuesEvent
2019-09-27 18:27:09
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Bug: Experience Dates are displaying incorrectly during apply process
Apply Process Bug
Environment: Production Issue: Work experience dates are displaying one month off for applicants Steps to reproduce: 1) Jot down work experience start and end dates in USAJOBS 2) Apply for an internship in Open Opps. - on the work experience page a job that had a start date of 01/2018 displays as 12/2017 - end date also displays incorrectly Related ticket 3954 - dates display incorrectly on the application review page
1.0
Bug: Experience Dates are displaying incorrectly during apply process - Environment: Production Issue: Work experience dates are displaying one month off for applicants Steps to reproduce: 1) Jot down work experience start and end dates in USAJOBS 2) Apply for an internship in Open Opps. - on the work experience page a job that had a start date of 01/2018 displays as 12/2017 - end date also displays incorrectly Related ticket 3954 - dates display incorrectly on the application review page
process
bug experience dates are displaying incorrectly during apply process environment production issue work experience dates are displaying one month off for applicants steps to reproduce jot down work experience start and end dates in usajobs apply for an internship in open opps on the work experience page a job that had a start date of displays as end date also displays incorrectly related ticket dates display incorrectly on the application review page
1
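The one-month offset reported in the record above (01/2018 shown as 12/2017) is characteristic of a double zero-based month conversion, since JavaScript's `Date` months run 0–11. A hypothetical Python sketch reproducing just the arithmetic (Open Opps itself is a JavaScript application; these function names are invented):

```python
def buggy_display(year: int, month: int) -> str:
    """month is the 1-12 value the applicant entered."""
    idx = month - 1      # correct zero-based index (0 = January)
    idx = idx - 1        # the bug: a second off-by-one conversion downstream
    if idx < 0:          # January rolls back into December of the prior year
        idx += 12
        year -= 1
    return f"{idx + 1:02d}/{year}"

def fixed_display(year: int, month: int) -> str:
    """Convert exactly once: render the entered month as-is."""
    return f"{month:02d}/{year}"
```

Under this sketch, a start date of 01/2018 renders as 12/2017, matching the reported symptom.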
19,515
25,828,799,352
IssuesEvent
2022-12-12 14:45:53
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
ptyHost process has memory leak
bug *not-reproducible perf confirmation-pending terminal-process
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: Version: 1.68.1 - OS Version: Windows_NT x64 10.0.19043 It seems like a memory leak ![image](https://user-images.githubusercontent.com/4336495/177452124-3ff0b415-29f0-4032-8a24-2daf584ab49f.png) The memory usage grows from hundreds of MB to more than 4000 MB in one night.
1.0
ptyHost process has memory leak - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: Version: 1.68.1 - OS Version: Windows_NT x64 10.0.19043 It seems like a memory leak ![image](https://user-images.githubusercontent.com/4336495/177452124-3ff0b415-29f0-4032-8a24-2daf584ab49f.png) The memory usage grows from hundreds of MB to more than 4000 MB in one night.
process
ptyhost process has memory leak does this issue occur when all extensions are disabled yes no report issue dialog can assist with this vs code version version os version windows nt it seems like a memory leak the memory usage grows from hundreds of mb to more than mb in one night
1
20,388
27,045,122,465
IssuesEvent
2023-02-13 09:13:11
camunda/issues
https://api.github.com/repos/camunda/issues
opened
BPMN Signal Events(1): Top-level signal start events
component:desktopModeler component:operate component:optimize component:webModeler component:zeebe component:zeebe-process-automation public feature-parity potential:8.3
> This is an epic internal-docs issue. It bundles all activities we conduct around a certain initiative. It will typically link to various child issues from various repositories and can be spread across multiple teams. ### Value Proposition Statement Trigger a BPMN signal via API to start multiple different definitions at the same time. ### User Problem In process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances (as intermediate) or processes (as start). Example Use-Case: [Insurance Policy conditions change](https://docs.camunda.org/manual/7.16/reference/bpmn20/events/signal-events/) In this epic we focus on solving the first problem: Starting an instance for all process definitions with a signal start event matching the signal name. ### User Stories **Design**: - As a Developer, I can model Top-level signal start events and define a signal name. **Automate**: - As a Developer, I can deploy Top-level signal start events to the Engine (Zeebe). - As a Developer, I can be sure that the Engine uses signal events correctly and e.g. triggers Top-Level signal start events when using the gRPC API or Java Client. - As a Developer, I can see Signal execution in Operate. **Improve**: - Allow displaying processes with Signals in Optimize. ### Implementation Notes In the first stage, it will be possible for users to broadcast signals using the gRPC API, starting new process instances for the processes that have top-level signal start events. Model highlighting that only the top-level signal start event will be supported at this stage, the other signal events are marked as not yet supported ![image](https://user-images.githubusercontent.com/20283848/202141907-4091580f-0c20-4965-b309-822b28a3ed24.png) Signal broadcasts arriving in the gateway have to be written to a partition leader.
The partition leader needs to relay the signal to other partitions, and it needs to start instances for each of the top-level signal start events that match the name of the broadcasted signal. The other partitions don't create new instances and don't have to relay the signal. A process is subscribed to a signal using a new SignalStartEventSubscription record. ### Links <!-- Additional links, for example: * Support Case X * Bug on team board Y * Concept document Z * Topic Slack Channel * Value Proposition Slide * Initiative in OKR sheet * --> #### Design ## Design Planning * Reviewed by design: 9 Feb 2023 * No design needed #### Implementation Tickets Optimize: https://jira.camunda.com/browse/OPT-6511 #### Doc Tickets
1.0
BPMN Signal Events(1): Top-level signal start events - > This is an epic internal-docs issue. It bundles all activities we conduct around a certain initiative. It will typically link to various child issues from various repositories and can be spread across multiple teams. ### Value Proposition Statement Trigger a BPMN signal via API to start multiple different definitions at the same time. ### User Problem In process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances (as intermediate) or processes (as start). Example Use-Case: [Insurance Policy conditions change](https://docs.camunda.org/manual/7.16/reference/bpmn20/events/signal-events/) In this epic we focus on solving the first problem: Starting an instance for all process definitions with a signal start event matching the signal name. ### User Stories **Design**: - As a Developer, I can model Top-level signal start events and define a signal name. **Automate**: - As a Developer, I can deploy Top-level signal start events to the Engine (Zeebe). - As a Developer, I can be sure that the Engine uses signal events correctly and e.g. triggers Top-Level signal start events when using the gRPC API or Java Client. - As a Developer, I can see Signal execution in Operate. **Improve**: - Allow displaying processes with Signals in Optimize. ### Implementation Notes In the first stage, it will be possible for users to broadcast signals using the gRPC API, starting new process instances for the processes that have top-level signal start events. Model highlighting that only the top-level signal start event will be supported at this stage, the other signal events are marked as not yet supported ![image](https://user-images.githubusercontent.com/20283848/202141907-4091580f-0c20-4965-b309-822b28a3ed24.png) Signal broadcasts arriving in the gateway have to be written to a partition leader.
The partition leader needs to relay the signal to other partitions, and it needs to start instances for each of the top-level signal start events that match the name of the broadcasted signal. The other partitions don't create new instances and don't have to relay the signal. A process is subscribed to a signal using a new SignalStartEventSubscription record. ### Links <!-- Additional links, for example: * Support Case X * Bug on team board Y * Concept document Z * Topic Slack Channel * Value Proposition Slide * Initiative in OKR sheet * --> #### Design ## Design Planning * Reviewed by design: 9 Feb 2023 * No design needed #### Implementation Tickets Optimize: https://jira.camunda.com/browse/OPT-6511 #### Doc Tickets
process
bpmn signal events top level signal start events this is an epic internal docs issue it bundles all activities we conduct around a certain initiative it will typically link to various child issues from various repositories and can be spread across multiple teams value proposition statement trigger a bpmn signal via api to start multiple different definitions at the same time user problem in process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances as intermediate or processes as start example use case in this epic we focus on solving the first problem starting an instance for all process definitions with a signal start event matching the signal name user stories design as a developer i can model top level signal start events and define a signal name automate as a developer i can deploy top level signal start events to the engine zeebe as a developer i can be sure that the engine uses signal events correctly and e g triggers top level signal start events when using the grpc api or java client as a developer i can see signal execution in operate improve allow displaying processes with signals in optimize implementation notes in the first stage it will be possible for users to broadcast signals using the grpc api starting new process instances for the processes that have top level signal start events model highlighting that only the top level signal start event will be supported at this stage the other signal events are marked as not yet supported signal broadcasts arriving in the gateway have to be written to a partition leader the partition leader needs to relay the signal to other partitions and it needs to start instances for each of the top level signal start events that match the name of the broadcasted signal the other partitions don t create new instances and don t have to relay the signal a process is subscribed to a signal using a new signalstarteventsubscription record links additional links for example
support case x bug on team board y concept document z topic slack channel value proposition slide initiative in okr sheet design design planning reviewed by design feb no design needed implementation tickets optimize doc tickets
1
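The record above specifies that a signal broadcast starts one new instance for every process definition with a matching top-level signal start event. A toy Python sketch of that behaviour (not Zeebe's implementation; the class and method names are invented for illustration):

```python
class SignalBroker:
    """Toy model: deployments register signal start subscriptions; a
    broadcast starts one instance per matching process definition."""

    def __init__(self):
        self.subscriptions = {}   # signal name -> [process ids]
        self.instances = []       # started process instances, in order

    def deploy(self, process_id, signal_name=None):
        # A top-level signal start event subscribes the whole process
        # (cf. the SignalStartEventSubscription record described above).
        if signal_name is not None:
            self.subscriptions.setdefault(signal_name, []).append(process_id)

    def broadcast(self, signal_name):
        # Start a new instance for every subscribed process definition;
        # processes without a matching subscription are left untouched.
        for process_id in self.subscriptions.get(signal_name, []):
            self.instances.append(process_id)
```

A broadcast with no subscribers simply starts nothing, mirroring the "other partitions don't create new instances" rule.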
21,159
28,133,547,529
IssuesEvent
2023-04-01 05:16:53
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
filterprocessor doesn't return an error when an invalid match_type is set in config
bug Stale processor/filter closed as inactive
**Describe the bug** filterprocessor doesn't return an error when an invalid match_type is set in config **Steps to reproduce** 1. Use the following config or similar with invalid config - different than `strict`/`regexp` for logs or different than `strict`/`regexp`/`expr` for metrics: ```yaml receivers: nop: processors: filter/invalid: logs: include: match_type: invalid exporters: nop: service: pipelines: logs: receivers: [nop] processors: [filter/invalid] exporters: [nop] ``` **What did you expect to see?** Collector to fail to start because of invalid config. **What did you see instead?** Collector starting successfully **What version did you use?** Version: https://github.com/open-telemetry/opentelemetry-collector-contrib/commit/372e040eb0b9331cb4b2ca370c8777eb820a854c **What config did you use?** Config: as in the **Steps to reproduce** section **Environment** OS: Mac Compiler: `go1.17.1`
1.0
filterprocessor doesn't return an error when an invalid match_type is set in config - **Describe the bug** filterprocessor doesn't return an error when an invalid match_type is set in config **Steps to reproduce** 1. Use the following config or similar with invalid config - different than `strict`/`regexp` for logs or different than `strict`/`regexp`/`expr` for metrics: ```yaml receivers: nop: processors: filter/invalid: logs: include: match_type: invalid exporters: nop: service: pipelines: logs: receivers: [nop] processors: [filter/invalid] exporters: [nop] ``` **What did you expect to see?** Collector to fail to start because of invalid config. **What did you see instead?** Collector starting successfully **What version did you use?** Version: https://github.com/open-telemetry/opentelemetry-collector-contrib/commit/372e040eb0b9331cb4b2ca370c8777eb820a854c **What config did you use?** Config: as in the **Steps to reproduce** section **Environment** OS: Mac Compiler: `go1.17.1`
process
filterprocessor doesn t return an error when an invalid match type is set in config describe the bug filterprocessor doesn t return an error when an invalid match type is set in config steps to reproduce use the following config or similar with invalid config different than strict regexp for logs or different than strict regexp expr for metrics yaml receivers nop processors filter invalid logs include match type invalid exporters nop service pipelines logs receivers processors exporters what did you expect to see collector to fail to start because of invalid config what did you see instead collector starting successfully what version did you use version what config did you use config as in the steps to reproduce section environment os mac compiler
1
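The filterprocessor record above asks for invalid `match_type` values to fail at startup rather than pass silently. A hypothetical Python sketch of such fail-fast validation (the collector itself is written in Go; the function and constant names here are invented, with the valid value sets taken from the issue text):

```python
VALID_LOG_MATCH_TYPES = {"strict", "regexp"}            # per the issue text
VALID_METRIC_MATCH_TYPES = {"strict", "regexp", "expr"}

def validate_filter_config(cfg: dict) -> None:
    """Raise ValueError if any include/exclude block uses an unknown match_type."""
    for signal, valid in (("logs", VALID_LOG_MATCH_TYPES),
                          ("metrics", VALID_METRIC_MATCH_TYPES)):
        section = cfg.get(signal, {})
        for clause in ("include", "exclude"):
            match_type = section.get(clause, {}).get("match_type")
            if match_type is not None and match_type not in valid:
                raise ValueError(
                    f"{signal}.{clause}: unsupported match_type {match_type!r}; "
                    f"expected one of {sorted(valid)}"
                )
```

With the config from the reproduction steps, `validate_filter_config({"logs": {"include": {"match_type": "invalid"}}})` raises instead of letting the pipeline start.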
80,559
10,024,129,824
IssuesEvent
2019-07-16 21:00:20
AguaClara/aguaclara
https://api.github.com/repos/AguaClara/aguaclara
closed
Sedimentation tank channel design code
aide_design feature medium
Take the CEE 4540 final project done by H2Flows to calculate the dimensions of the inlet and exit channels and convert it to a python script which could be used to calculate channel dimensions. Because the channels will be constant width we will not have to use most of this code, just the portion that calculates the length of the channel. However, it will be good to have the code there in case we want to change the width of the channels in the future and to document how the constant width was originally calculated.
1.0
Sedimentation tank channel design code - Take the CEE 4540 final project done by H2Flows to calculate the dimensions of the inlet and exit channels and convert it to a python script which could be used to calculate channel dimensions. Because the channels will be constant width we will not have to use most of this code, just the portion that calculates the length of the channel. However, it will be good to have the code there in case we want to change the width of the channels in the future and to document how the constant width was originally calculated.
non_process
sedimentation tank channel design code take the cee final project done by to calculate the dimensions of the inlet and exit channels and convert it to a python script which could be used to calculate channel dimensions because the channels will be constant width we will not have to use most of this code just the portion that calculates the length of the channel however it will be good to have the code there in case we want to change the width of the channels in the future and to document how the constant width was originally calculated
0
69,363
9,300,792,896
IssuesEvent
2019-03-23 16:36:35
brightcove/brine
https://api.github.com/repos/brightcove/brine
closed
Make header selection more obvious in docs
documentation
the |response_attribute| substitution doesn't seem to be working in the step reference and there's not much information elsewhere.
1.0
Make header selection more obvious in docs - the |response_attribute| substitution doesn't seem to be working in the step reference and there's not much information elsewhere.
non_process
make header selection more obvious in docs the response attribute substitution doesn t seem to be working in the step reference and there s not much information elsewhere
0
10,301
8,877,002,049
IssuesEvent
2019-01-12 20:11:48
badges/shields
https://api.github.com/repos/badges/shields
closed
Is it possible to connect to a private repo in bitbucket?
self-hosting service-badge
HI, I have a private bitbucket repo . all my servers are local | how can I use shields ? Thanks for helping <!-- Love Shields? Please consider donating $10 to sustain our activities: 👉 https://opencollective.com/shields -->
1.0
Is it possible to connect to a private repo in bitbucket? - HI, I have a private bitbucket repo . all my servers are local | how can I use shields ? Thanks for helping <!-- Love Shields? Please consider donating $10 to sustain our activities: 👉 https://opencollective.com/shields -->
non_process
is it possible to connect to a private repo in bitbucket hi i have a private bitbucket repo all my servers are local how can i use shields thanks for helping love shields please consider donating to sustain our activities 👉
0
3,652
6,690,718,398
IssuesEvent
2017-10-09 10:13:16
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
Add interface for flexible IRF-generation
preprocessor
Create block ``` generate_irfs(options_list); (groupname1), exo_name1=1, exo_name2=-0.5; (groupname2), exo_name1=2, exo_name3=-0.5; end; ``` or alternatively (as suggested in #115 ) ``` generate_irfs(options_list); [ name='groupname1' ] exo_name1=1, exo_name2=-0.5; [ name='groupname2' ] exo_name1=2, exo_name3=-0.5; end; ``` where `options_list` can be (for now) - `stderr_multiples` translating to `options_.irf_opt.stderr_multiples` - `diagonal_only` translating to `options_.irf_opt.diagonal_only` and where each line translates into 1. a cell array `options_.irf_opt.irf_shock_graphtitles` storing the group_name along the rows 2. a column of a matrix `options_.irf_opt.irf_shocks` of size M_.exo_nbr*n_lines where non-specified `var_exo` get a 0 entry. If no line is provided, leave those fields empty. The block translates into 1. setting of options_ 2. a call to `oo_.irfs=generate_irfs(M_,options_,oo_)`
1.0
Add interface for flexible IRF-generation - Create block ``` generate_irfs(options_list); (groupname1), exo_name1=1, exo_name2=-0.5; (groupname2), exo_name1=2, exo_name3=-0.5; end; ``` or alternatively (as suggested in #115 ) ``` generate_irfs(options_list); [ name='groupname1' ] exo_name1=1, exo_name2=-0.5; [ name='groupname2' ] exo_name1=2, exo_name3=-0.5; end; ``` where `options_list` can be (for now) - `stderr_multiples` translating to `options_.irf_opt.stderr_multiples` - `diagonal_only` translating to `options_.irf_opt.diagonal_only` and where each line translates into 1. a cell array `options_.irf_opt.irf_shock_graphtitles` storing the group_name along the rows 2. a column of a matrix `options_.irf_opt.irf_shocks` of size M_.exo_nbr*n_lines where non-specified `var_exo` get a 0 entry. If no line is provided, leave those fields empty. The block translates into 1. setting of options_ 2. a call to `oo_.irfs=generate_irfs(M_,options_,oo_)`
process
add interface for flexible irf generation create block generate irfs options list exo exo name exo exo name end or alternatively as suggested in generate irfs options list exo exo name exo exo name end where options list can be for now stderr multiples translating to options irf opt stderr multiples diagonal only translating to options irf opt diagonal only and where each line translates into a cell array options irf opt irf shock graphtitles storing the group name along the rows a column of a matrix options irf opt irf shocks of size m exo nbr n lines where non specified var exo get a entry if no line is provided leave those fields empty the block translates into setting of options a call to oo irfs generate irfs m options oo
1
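The record above describes turning each named shock line into one graph title plus one column of an exo_nbr × n_lines shock matrix, with unspecified shocks defaulting to 0. A hypothetical Python sketch of that translation (Dynare itself is MATLAB/Octave; this helper name and data layout are illustrative only):

```python
def build_irf_shocks(exo_names, lines):
    """lines: list of (groupname, {exo_name: value}) pairs, one per block line.

    Returns (titles, shocks) where shocks is an exo_nbr x n_lines matrix
    as nested lists, mirroring options_.irf_opt.irf_shock_graphtitles and
    options_.irf_opt.irf_shocks from the record above."""
    index = {name: i for i, name in enumerate(exo_names)}
    # Unspecified exogenous variables get a 0 entry in every column.
    shocks = [[0.0] * len(lines) for _ in exo_names]
    titles = []
    for col, (group, values) in enumerate(lines):
        titles.append(group)
        for name, value in values.items():
            shocks[index[name]][col] = value
    return titles, shocks
```

Feeding it the two lines from the proposed block yields one column per group, with exo_name2 and exo_name3 zeroed out in the column where they are not set.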
7,474
2,905,735,523
IssuesEvent
2015-06-19 03:17:35
piwik/piwik
https://api.github.com/repos/piwik/piwik
closed
2.14.0 pre-release: manual performance test of All Websites dashboard with thousands of websites
c: Tests & QA not-in-changelog Task
Shortly before we release 2.14.0 (likely in RC cycle) we shall run a manual test of Piwik with 20,000 websites and check that the dashboard loads reasonably fast (since #6809). This is to prevent an easily introducible regression (e.g. #7877). Sometime later we will create automated performance tests #7889. Ideally the dashboard should load under 5 seconds, when Piwik tracks 20,000 websites and setting 'browser archiving' is disabled. cc @tsteur
1.0
2.14.0 pre-release: manual performance test of All Websites dashboard with thousands of websites - Shortly before we release 2.14.0 (likely in RC cycle) we shall run a manual test of Piwik with 20,000 websites and check that the dashboard loads reasonably fast (since #6809). This is to prevent an easily introducible regression (e.g. #7877). Sometime later we will create automated performance tests #7889. Ideally the dashboard should load under 5 seconds, when Piwik tracks 20,000 websites and setting 'browser archiving' is disabled. cc @tsteur
non_process
pre release manual performance test of all websites dashboard with thousands of websites shortly before we release likely in rc cycle we shall run a manual test of piwik with websites and check that the dashboard loads reasonably fast since this is to prevent an easily introducible regression eg sometime later we will create automated performance tests ideally the dashboard should load under seconds when piwik tracks websites and setting browser archiving is disabled cc tsteur
0
8,680
11,811,427,508
IssuesEvent
2020-03-19 18:11:07
pacificclimate/climate-explorer-data-prep
https://api.github.com/repos/pacificclimate/climate-explorer-data-prep
closed
Calculate baseline data for p2a summary table
process new data
The plan2adapt summary table shows a spread of how much possible variables change over time. In order to do this, we need to have "baseline" values, which are defined as the 1961-1990 period. The baseline dataset is the `anusplin` dataset. The following values are missing: anusplin hdd ------------- - [x] generate hdd climatologies - [x] index hdd climatological data anusplin prsn ------------- - [x] generate prsn data - [x] generate prsn climatologies - [x] index prsn climatological data anusplin fdETCCDI ------------------- - [x] generate fdETCCDI data - [x] generate fdETCCDI climatologies - [x] index fdETCCDI climatological data
1.0
Calculate baseline data for p2a summary table - The plan2adapt summary table shows a spread of how much possible variables change over time. In order to do this, we need to have "baseline" values, which are defined as the 1961-1990 period. The baseline dataset is the `anusplin` dataset. The following values are missing: anusplin hdd ------------- - [x] generate hdd climatologies - [x] index hdd climatological data anusplin prsn ------------- - [x] generate prsn data - [x] generate prsn climatologies - [x] index prsn climatological data anusplin fdETCCDI ------------------- - [x] generate fdETCCDI data - [x] generate fdETCCDI climatologies - [x] index fdETCCDI climatological data
process
calculate baseline data for summary table the summary table shows a spread of how much possible variables change over time in order to do this we need to have baseline values which are defined as the period the baseline dataset is the anusplin dataset the following values are missing anusplin hdd generate hdd climatologies index hdd climatological data anusplin prsn generate prsn data generate prsn climatologies index prsn climatological data anusplin fdetccdi generate fdetccdi data generate fdetccdi climatologies index fdetccdi climatological data
1
21,161
28,136,019,458
IssuesEvent
2023-04-01 11:48:02
calaldees/KaraKara
https://api.github.com/repos/calaldees/KaraKara
closed
Consider removing preview videos
browser2 processmedia2 thought
Do people actually use the preview videos? - If yes, maybe we should replace "one full-size encode and two previews" with "two full-size encodes" - If no, maybe we should delete the preview encodes But first, let's add logging to see what is actually used...
1.0
Consider removing preview videos - Do people actually use the preview videos? - If yes, maybe we should replace "one full-size encode and two previews" with "two full-size encodes" - If no, maybe we should delete the preview encodes But first, let's add logging to see what is actually used...
process
consider removing preview videos do people actually use the preview videos if yes maybe we should replace one full size encode and two previews with two full size encodes if no maybe we should delete the preview encodes but first let s add logging to see what is actually used
1
22,639
31,886,625,579
IssuesEvent
2023-09-17 02:00:09
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 15 Sep 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events

### mEBAL2 Database and Benchmark: Image-based Multispectral Eyeblink Detection

- **Authors:** Roberto Daza, Aythami Morales, Julian Fierrez, Ruben Tolosana, Ruben Vera-Rodriguez
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2309.07880
- **Pdf link:** https://arxiv.org/pdf/2309.07880
- **Abstract** This work introduces a new multispectral database and novel approaches for eyeblink detection in RGB and Near-Infrared (NIR) individual images. Our contributed dataset (mEBAL2, multimodal Eye Blink and Attention Level estimation, Version 2) is the largest existing eyeblink database, representing a great opportunity to improve data-driven multispectral approaches for blink detection and related applications (e.g., attention level estimation and presentation attack detection in face biometrics). mEBAL2 includes 21,100 image sequences from 180 different students (more than 2 million labeled images in total) while conducting a number of e-learning tasks of varying difficulty or taking a real course on HTML initiation through the edX MOOC platform. mEBAL2 uses multiple sensors, including two Near-Infrared (NIR) and one RGB camera to capture facial gestures during the execution of the tasks, as well as an Electroencephalogram (EEG) band to get the cognitive activity of the user and blinking events. Furthermore, this work proposes a Convolutional Neural Network architecture as a benchmark for blink detection on mEBAL2 with performance up to 97%. Different training methodologies are implemented using the RGB spectrum, NIR spectrum, and the combination of both to enhance the performance of existing eyeblink detectors. We demonstrate that combining NIR and RGB images during training improves the performance of RGB eyeblink detectors (i.e., detection based only on an RGB image). Finally, the generalization capacity of the proposed eyeblink detectors is validated in wilder and more challenging environments like the HUST-LEBW dataset to show the usefulness of mEBAL2 to train a new generation of data-driven approaches for eyeblink detection.

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

### A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing

- **Authors:** Yujie Feng, Yin Yang, Xiaohong Fan, Zhengpeng Zhang, Jianping Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Information Theory (cs.IT); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2309.07524
- **Pdf link:** https://arxiv.org/pdf/2309.07524
- **Abstract** Remote sensing images are essential for many earth science applications, but their quality can be degraded due to limitations in sensor technology and complex imaging environments. To address this, various remote sensing image deblurring methods have been developed to restore sharp, high-quality images from degraded observational data. However, most traditional model-based deblurring methods usually require predefined hand-crafted prior assumptions, which are difficult to handle in complex applications, and most deep learning-based deblurring methods are designed as a black box, lacking transparency and interpretability. In this work, we propose a novel blind deblurring learning framework based on alternating iterations of shrinkage thresholds, alternately updating blurring kernels and images, with a theoretical foundation for the network design. Additionally, we propose a learnable blur kernel proximal mapping module to improve the blur kernel evaluation in the kernel domain. Then, we propose a deep proximal mapping module in the image domain, which combines a generalized shrinkage threshold operator and a multi-scale prior feature extraction block. This module also introduces an attention mechanism to adaptively adjust the prior importance, thus avoiding the drawbacks of hand-crafted image prior terms. Thus, a novel multi-scale generalized shrinkage threshold network (MGSTNet) is designed to specifically focus on learning deep geometric prior features to enhance image restoration. Experiments demonstrate the superiority of our MGSTNet framework on remote sensing image datasets compared to existing deblurring methods.

## Keyword: ISP

### Unbiased Face Synthesis With Diffusion Models: Are We There Yet?

- **Authors:** Harrison Rosenberg, Shimaa Ahmed, Guruprasad V Ramesh, Ramya Korlakai Vinayak, Kassem Fawaz
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2309.07277
- **Pdf link:** https://arxiv.org/pdf/2309.07277
- **Abstract** Text-to-image diffusion models have achieved widespread popularity due to their unprecedented image generation capability. In particular, their ability to synthesize and modify human faces has spurred research into using generated face images in both training data augmentation and model performance assessments. In this paper, we study the efficacy and shortcomings of generative models in the context of face generation. Utilizing a combination of qualitative and quantitative measures, including embedding-based metrics and user studies, we present a framework to audit the characteristics of generated faces conditioned on a set of social attributes. We applied our framework to faces generated through state-of-the-art text-to-image diffusion models. We identify several limitations of face image generation that include faithfulness to the text prompt, demographic disparities, and distributional shifts. Furthermore, we present an analytical model that provides insights into how training data selection contributes to the performance of generative models.

### Co-Salient Object Detection with Semantic-Level Consensus Extraction and Dispersion

- **Authors:** Peiran Xu, Yadong Mu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.07753
- **Pdf link:** https://arxiv.org/pdf/2309.07753
- **Abstract** Given a group of images, co-salient object detection (CoSOD) aims to highlight the common salient object in each image. Two factors are closely related to the success of this task: consensus extraction, and the dispersion of that consensus to each image. Most previous works represent the group consensus using local features, while we instead utilize a hierarchical Transformer module to extract semantic-level consensus. It can therefore obtain a more comprehensive representation of the common object category and exclude interference from other objects that share local similarities with the target object. In addition, we propose a Transformer-based dispersion module that takes into account the variation of the co-salient object across different scenes. It distributes the consensus to the image feature maps in an image-specific way while making full use of interactions within the group. These two modules are integrated with a ViT encoder and an FPN-like decoder to form an end-to-end trainable network, without additional branches or auxiliary losses. The proposed method is evaluated on three commonly used CoSOD datasets and achieves state-of-the-art performance.

### Gradient constrained sharpness-aware prompt learning for vision-language models

- **Authors:** Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.07866
- **Pdf link:** https://arxiv.org/pdf/2309.07866
- **Abstract** This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs): improving the performance on unseen classes while maintaining the performance on seen classes. Compared with existing generalizable methods that neglect seen-class degradation, this problem setting is stricter and fits more closely with practical applications. To solve it, we start from the optimization perspective and leverage the relationship between loss landscape geometry and model generalization ability. By analyzing the loss landscape of the state-of-the-art method and the widely used Sharpness-aware Minimization (SAM), we conclude that the trade-off performance correlates with both loss value and loss sharpness, and each of them is indispensable. However, we find that the optimizing gradient of existing methods cannot always maintain high consistency with both loss value and loss sharpness during the whole optimization procedure. To this end, we propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp), which dynamically constrains the optimizing gradient, thus achieving the above two-fold optimization objective simultaneously. Extensive experiments verify the effectiveness of GCSCoOp on the trade-off problem.
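For context, GCSCoOp builds on Sharpness-aware Minimization (SAM), which replaces the ordinary gradient step with a gradient evaluated at an adversarially perturbed point inside a small L2 ball. A minimal pure-Python sketch of the baseline SAM update on a toy scalar loss (illustrative only, not the authors' gradient-constrained variant; `lr` and `rho` are arbitrary toy values):

```python
def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One baseline SAM update on a scalar weight.

    Step 1: move to the (approximate) worst-case point inside an L2 ball
            of radius rho around w: eps = rho * g / |g|.
    Step 2: take an ordinary gradient step, but with the gradient
            evaluated at that perturbed point w + eps.
    """
    g = grad_fn(w)
    if g == 0.0:
        return w                      # already flat: nothing to sharpen
    eps = rho * g / abs(g)            # ascent direction scaled to the ball radius
    return w - lr * grad_fn(w + eps)  # descend using the worst-case gradient

# Toy loss L(w) = (w - 3)^2 with gradient 2 * (w - 3).
grad = lambda w: 2.0 * (w - 3.0)

w = 0.0
for _ in range(200):
    w = sam_step(w, grad)
# w settles near the toy minimum at 3 (within roughly rho of it).
```

Because the descent direction comes from the worst-case neighbour rather than the current point, the iterate hovers within about `rho` of the plain minimizer, which is exactly the flat-minimum bias the abstract refers to.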
### mEBAL2 Database and Benchmark: Image-based Multispectral Eyeblink Detection

- **Authors:** Roberto Daza, Aythami Morales, Julian Fierrez, Ruben Tolosana, Ruben Vera-Rodriguez
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2309.07880
- **Pdf link:** https://arxiv.org/pdf/2309.07880
- **Abstract** This work introduces a new multispectral database and novel approaches for eyeblink detection in RGB and Near-Infrared (NIR) individual images. Our contributed dataset (mEBAL2, multimodal Eye Blink and Attention Level estimation, Version 2) is the largest existing eyeblink database, representing a great opportunity to improve data-driven multispectral approaches for blink detection and related applications (e.g., attention level estimation and presentation attack detection in face biometrics). mEBAL2 includes 21,100 image sequences from 180 different students (more than 2 million labeled images in total) while conducting a number of e-learning tasks of varying difficulty or taking a real course on HTML initiation through the edX MOOC platform. mEBAL2 uses multiple sensors, including two Near-Infrared (NIR) and one RGB camera to capture facial gestures during the execution of the tasks, as well as an Electroencephalogram (EEG) band to get the cognitive activity of the user and blinking events. Furthermore, this work proposes a Convolutional Neural Network architecture as a benchmark for blink detection on mEBAL2 with performance up to 97%. Different training methodologies are implemented using the RGB spectrum, NIR spectrum, and the combination of both to enhance the performance of existing eyeblink detectors. We demonstrate that combining NIR and RGB images during training improves the performance of RGB eyeblink detectors (i.e., detection based only on an RGB image). Finally, the generalization capacity of the proposed eyeblink detectors is validated in wilder and more challenging environments like the HUST-LEBW dataset to show the usefulness of mEBAL2 to train a new generation of data-driven approaches for eyeblink detection.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Judging a video by its bitstream cover

- **Authors:** Yuxing Han, Yunan Ding, Jiangtao Wen, Chen Ye Gan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.07361
- **Pdf link:** https://arxiv.org/pdf/2309.07361
- **Abstract** Classifying videos into distinct categories, such as Sport and Music Video, is crucial for multimedia understanding and retrieval, especially in an age where an immense volume of video content is constantly being generated. Traditional methods require video decompression to extract pixel-level features like color, texture, and motion, thereby increasing computational and storage demands. Moreover, these methods often suffer from performance degradation in low-quality videos. We present a novel approach that examines only the post-compression bitstream of a video to perform classification, eliminating the need for decompression. We validate our approach using a custom-built dataset comprising over 29,000 YouTube video clips, totaling 6,000 hours and spanning 11 distinct categories. Our preliminary evaluations indicate precision, accuracy, and recall rates well over 80%. The algorithm operates approximately 15,000 times faster than real-time for 30 fps videos, outperforming the traditional Dynamic Time Warping (DTW) algorithm by six orders of magnitude.
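For context on the baseline the bitstream paper compares against: classic Dynamic Time Warping aligns two sequences by dynamic programming over a full cost table, which is what makes it comparatively slow (O(nm) per pair). A minimal sketch for 1-D sequences (illustrative; not the paper's implementation):

```python
def dtw(a, b):
    """Classic Dynamic Time Warping distance between two 1-D sequences.

    cost[i][j] holds the cheapest alignment cost of a[:i] against b[:j];
    each cell extends one of its three neighbours, so the whole table
    costs O(len(a) * len(b)) to fill.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a advances alone
                                 cost[i][j - 1],      # b advances alone
                                 cost[i - 1][j - 1])  # both advance (match)
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0 (the repeated sample aligns for free)
```

The quadratic table per comparison is the cost the bitstream-only classifier avoids entirely, hence the reported speed gap.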
## Keyword: RAW

### A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing

- **Authors:** Yujie Feng, Yin Yang, Xiaohong Fan, Zhengpeng Zhang, Jianping Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Information Theory (cs.IT); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2309.07524
- **Pdf link:** https://arxiv.org/pdf/2309.07524
- **Abstract** Remote sensing images are essential for many earth science applications, but their quality can be degraded due to limitations in sensor technology and complex imaging environments. To address this, various remote sensing image deblurring methods have been developed to restore sharp, high-quality images from degraded observational data. However, most traditional model-based deblurring methods usually require predefined hand-crafted prior assumptions, which are difficult to handle in complex applications, and most deep learning-based deblurring methods are designed as a black box, lacking transparency and interpretability. In this work, we propose a novel blind deblurring learning framework based on alternating iterations of shrinkage thresholds, alternately updating blurring kernels and images, with a theoretical foundation for the network design. Additionally, we propose a learnable blur kernel proximal mapping module to improve the blur kernel evaluation in the kernel domain. Then, we propose a deep proximal mapping module in the image domain, which combines a generalized shrinkage threshold operator and a multi-scale prior feature extraction block. This module also introduces an attention mechanism to adaptively adjust the prior importance, thus avoiding the drawbacks of hand-crafted image prior terms. Thus, a novel multi-scale generalized shrinkage threshold network (MGSTNet) is designed to specifically focus on learning deep geometric prior features to enhance image restoration. Experiments demonstrate the superiority of our MGSTNet framework on remote sensing image datasets compared to existing deblurring methods.

## Keyword: raw image

There is no result
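The "generalized shrinkage threshold" at the core of MGSTNet generalizes the classic soft-thresholding operator, i.e. the proximal map of the scaled absolute value lam * |x|. A minimal sketch of that baseline operator (the paper's learned, multi-scale version is not shown here):

```python
def soft_threshold(x, lam):
    """Soft-thresholding: the proximal operator of lam * |x|.

    Zeroes anything with magnitude below lam and shrinks the rest
    toward zero by lam -- the building block that learned shrinkage
    networks generalize with data-dependent thresholds.
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

print([soft_threshold(v, 0.5) for v in (-2.0, -0.3, 0.0, 0.4, 1.5)])
# → [-1.5, 0.0, 0.0, 0.0, 1.0]
```

Applied element-wise inside an unrolled optimization network, this shrinkage step is what enforces sparsity of the prior coefficients at each iteration.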
eyeblink detectors i e detection based only on a rgb image finally the generalization capacity of the proposed eyeblink detectors is validated in wilder and more challenging environments like the hust lebw dataset to show the usefulness of to train a new generation of data driven approaches for eyeblink detection keyword image signal processing there is no result keyword image signal process there is no result keyword compression judging a video by its bitstream cover authors yuxing han yunan ding jiangtao wen chen ye gan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract classifying videos into distinct categories such as sport and music video is crucial for multimedia understanding and retrieval especially in an age where an immense volume of video content is constantly being generated traditional methods require video decompression to extract pixel level features like color texture and motion thereby increasing computational and storage demands moreover these methods often suffer from performance degradation in low quality videos we present a novel approach that examines only the post compression bitstream of a video to perform classification eliminating the need for bitstream we validate our approach using a custom built data set comprising over youtube video clips totaling hours and spanning distinct categories our preliminary evaluations indicate precision accuracy and recall rates well over the algorithm operates approximately times faster than real time for videos outperforming traditional dynamic time warping dtw algorithm by six orders of magnitude keyword raw a multi scale generalized shrinkage threshold network for image blind deblurring in remote sensing authors yujie feng yin yang xiaohong fan zhengpeng zhang jianping zhang subjects computer vision and pattern recognition cs cv information theory cs it image and video processing eess iv arxiv link pdf link abstract remote sensing images are essential for many earth 
science applications but their quality can be degraded due to limitations in sensor technology and complex imaging environments to address this various remote sensing image deblurring methods have been developed to restore sharp high quality images from degraded observational data however most traditional model based deblurring methods usually require predefined hand craft prior assumptions which are difficult to handle in complex applications and most deep learning based deblurring methods are designed as a black box lacking transparency and interpretability in this work we propose a novel blind deblurring learning framework based on alternating iterations of shrinkage thresholds alternately updating blurring kernels and images with the theoretical foundation of network design additionally we propose a learnable blur kernel proximal mapping module to improve the blur kernel evaluation in the kernel domain then we proposed a deep proximal mapping module in the image domain which combines a generalized shrinkage threshold operator and a multi scale prior feature extraction block this module also introduces an attention mechanism to adaptively adjust the prior importance thus avoiding the drawbacks of hand crafted image prior terms thus a novel multi scale generalized shrinkage threshold network mgstnet is designed to specifically focus on learning deep geometric prior features to enhance image restoration experiments demonstrate the superiority of our mgstnet framework on remote sensing image datasets compared to existing deblurring methods keyword raw image there is no result
1
71,881
30,922,038,886
IssuesEvent
2023-08-06 02:31:35
Zahlungsmittel/Zahlungsmittel
https://api.github.com/repos/Zahlungsmittel/Zahlungsmittel
opened
[CLOSED] refactor(frontend): text juni to juli
service: wallet frontend refactor imported
<a href="https://github.com/ogerly"><img src="https://avatars.githubusercontent.com/u/1324583?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ogerly](https://github.com/ogerly)** _Monday Jul 03, 2023 at 13:21 GMT_ _Originally opened as https://github.com/gradido/gradido/pull/3116_ ---- <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates --> ## 🍰 Pullrequest text changed june to july ---- _**[ogerly](https://github.com/ogerly)** included the following code: https://github.com/gradido/gradido/pull/3116/commits_
1.0
[CLOSED] refactor(frontend): text juni to juli - <a href="https://github.com/ogerly"><img src="https://avatars.githubusercontent.com/u/1324583?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ogerly](https://github.com/ogerly)** _Monday Jul 03, 2023 at 13:21 GMT_ _Originally opened as https://github.com/gradido/gradido/pull/3116_ ---- <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates --> ## 🍰 Pullrequest text changed june to july ---- _**[ogerly](https://github.com/ogerly)** included the following code: https://github.com/gradido/gradido/pull/3116/commits_
non_process
refactor frontend text juni to juli issue by monday jul at gmt originally opened as 🍰 pullrequest text changed june to july included the following code
0
36,252
5,040,453,402
IssuesEvent
2016-12-19 05:39:35
RMajewski/InstrumenteSoftware
https://api.github.com/repos/RMajewski/InstrumenteSoftware
closed
TestDataName
completed test todo
Testet die Datenklasse "DataName" auf die richtige Verarbeitung der Daten: - ID - Name
1.0
TestDataName - Testet die Datenklasse "DataName" auf die richtige Verarbeitung der Daten: - ID - Name
non_process
testdataname testet die datenklasse dataname auf die richtige verarbeitung der daten id name
0
16,447
21,327,213,132
IssuesEvent
2022-04-18 01:30:49
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Allow to ensure a process will exit and warn for remaining async operations if any
feature request process stale
**Is your feature request related to a problem? Please describe.** When you write a complex node program, with a lot of lib & external service usage (databases...), you still want your process to exit cleanly. The recommended way is of course to stop all async actions, active connection & socket listen and let the process stop ```javascript async onSignal() { try { await server.stop(); await database.stop(); console.log('All resources cleaned up, process should now exit'); // Process should now exit } catch (e) { // Undefined behaviour console.error(e); process.exit(1); } } ``` Sometimes, because of badly cleaned resources, the process won't exit at all. This could lead to service degradation, as the process won't restart until it's stopped, but is not responding to user requests anymore. It might then be tempting to add a `process.exit()` at the end of the normal stop process. It would solve the service degradation issue, but also hides the underlying issue (service not cleaned up). In my experience, those issues mostly happen in production, with resources not cleaned after database / upstream service reconnection due to network errors and long running processes. Those issues are not easily reproducible in a test environment. **Describe the solution you'd like** I would see 2 possible solutions: * Have a `process.ensureExit()` function, that would check for remaining async operations, warn if there is any, and exit anyway. * Have a `process.tryExit()` function, that would throw if a clean exit is not possible, letting the user handle this case (writing special logs, more clean-up attempts...)
1.0
Allow to ensure a process will exit and warn for remaining async operations if any - **Is your feature request related to a problem? Please describe.** When you write a complex node program, with a lot of lib & external service usage (databases...), you still want your process to exit cleanly. The recommended way is of course to stop all async actions, active connection & socket listen and let the process stop ```javascript async onSignal() { try { await server.stop(); await database.stop(); console.log('All resources cleaned up, process should now exit'); // Process should now exit } catch (e) { // Undefined behaviour console.error(e); process.exit(1); } } ``` Sometimes, because of badly cleaned resources, the process won't exit at all. This could lead to service degradation, as the process won't restart until it's stopped, but is not responding to user requests anymore. It might then be tempting to add a `process.exit()` at the end of the normal stop process. It would solve the service degradation issue, but also hides the underlying issue (service not cleaned up). In my experience, those issues mostly happen in production, with resources not cleaned after database / upstream service reconnection due to network errors and long running processes. Those issues are not easily reproducible in a test environment. **Describe the solution you'd like** I would see 2 possible solutions: * Have a `process.ensureExit()` function, that would check for remaining async operations, warn if there is any, and exit anyway. * Have a `process.tryExit()` function, that would throw if a clean exit is not possible, letting the user handle this case (writing special logs, more clean-up attempts...)
process
allow to ensure a process will exit and warn for remaining async operations if any is your feature request related to a problem please describe when you write a complex node program with a lot of lib external service usage databases you still want your process to exit cleanly the recommended way is of course to stop all async actions active connection socket listen and let the process stop javascript async onsignal try await server stop await database stop console log all resources cleaned up process should now exit process should now exit catch e undefined behaviour console error e process exit sometimes because of badly cleaned resources the process won t exit at all this could lead to service degradation as the process won t restart until it s stopped but is not responding to user requests anymore it might then be tempting to add a process exit at the end of the normal stop process it would solve the service degradation issue but also hides the underlying issue service not cleaned up in my experience those issues mostly happen in production with resources not cleaned after database upstream service reconnection due to network errors and long running processes those issues are not easily reproducible in a test environment describe the solution you d like i would see possible solutions have a process ensureexit function that would check for remaining async operations warn if there is any and exit anyway have a process tryexit function that would throw if a clean exit is not possible letting the user handle this case writing special logs more clean up attempts
1
12,747
15,107,544,908
IssuesEvent
2021-02-08 15:32:43
alphagov/govuk-design-system
https://api.github.com/repos/alphagov/govuk-design-system
closed
Publish cookies page pattern
cookie banner guidance process 🕔 hours
## What When we publish the cookie banner component in 3.11.0 release, publish cookies page pattern at the same time. ## Why Pattern has been drafted and reviewed. ## Who needs to know about this Whoever is doing the release ## Done when - [ ] PR created - [ ] Pattern published
1.0
Publish cookies page pattern - ## What When we publish the cookie banner component in 3.11.0 release, publish cookies page pattern at the same time. ## Why Pattern has been drafted and reviewed. ## Who needs to know about this Whoever is doing the release ## Done when - [ ] PR created - [ ] Pattern published
process
publish cookies page pattern what when we publish the cookie banner component in release publish cookies page pattern at the same time why pattern has been drafted and reviewed who needs to know about this whoever is doing the release done when pr created pattern published
1
10,167
13,044,162,710
IssuesEvent
2020-07-29 03:47:35
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `ValuesReal` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `ValuesReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `ValuesReal` from TiDB - ## Description Port the scalar function `ValuesReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function valuesreal from tidb description port the scalar function valuesreal from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
81,402
15,618,517,100
IssuesEvent
2021-03-20 01:16:14
rvvergara/haiku-my-next-frontend
https://api.github.com/repos/rvvergara/haiku-my-next-frontend
opened
CVE-2021-23337 (High) detected in lodash-4.17.15.tgz
security vulnerability
## CVE-2021-23337 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/igaku-my-next-frontend/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/igaku-my-next-frontend/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - enzyme-to-json-3.4.3.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23337 (High) detected in lodash-4.17.15.tgz - ## CVE-2021-23337 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/igaku-my-next-frontend/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/igaku-my-next-frontend/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - enzyme-to-json-3.4.3.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm igaku my next frontend package json path to vulnerable library tmp ws scm igaku my next frontend node modules lodash package json dependency hierarchy enzyme to json tgz root library x lodash tgz vulnerable library vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to command injection via template publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
0
3,027
6,029,105,963
IssuesEvent
2017-06-08 17:14:06
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
RemoteExecutorConsoleApp not restoring on some Fedora test runs
area-System.Diagnostics.Process blocking-clean-ci test bug test-run-core
The `RemoteExecutorConsoleApp.exe` that is used in the `System.Diagnostics.Process` tests is either missing or can't be accessed on some Fedora test runs. It happens infrequently, but enough to cause these failures: - https://github.com/dotnet/corefx/issues/16488 - https://github.com/dotnet/corefx/issues/16400 - https://github.com/dotnet/corefx/issues/15844 FYI @stephentoub @Priya91
1.0
RemoteExecutorConsoleApp not restoring on some Fedora test runs - The `RemoteExecutorConsoleApp.exe` that is used in the `System.Diagnostics.Process` tests is either missing or can't be accessed on some Fedora test runs. It happens infrequently, but enough to cause these failures: - https://github.com/dotnet/corefx/issues/16488 - https://github.com/dotnet/corefx/issues/16400 - https://github.com/dotnet/corefx/issues/15844 FYI @stephentoub @Priya91
process
remoteexecutorconsoleapp not restoring on some fedora test runs the remoteexecutorconsoleapp exe that is used in the system diagnostics process tests is either missing or can t be accessed on some fedora test runs it happens infrequently but enough to cause these failures fyi stephentoub
1
391,856
11,579,496,523
IssuesEvent
2020-02-21 18:02:59
cybercongress/dot-cyber
https://api.github.com/repos/cybercongress/dot-cyber
closed
Remove # from links
error priority
In 21st century its pity to have redundant hash in the urls. We need to get rid of it
1.0
Remove # from links - In 21st century its pity to have redundant hash in the urls. We need to get rid of it
non_process
remove from links in century its pity to have redundant hash in the urls we need to get rid of it
0
11,781
14,614,255,833
IssuesEvent
2020-12-22 09:36:26
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Implement `react-prisma` prototype
process/candidate team/client tech/typescript
In the same fashion as `react-pg`, we should provide a `react-prisma` package, that supports React suspense and therefore React Server Components https://github.com/facebook/react/tree/master/packages/react-pg
1.0
Implement `react-prisma` prototype - In the same fashion as `react-pg`, we should provide a `react-prisma` package, that supports React suspense and therefore React Server Components https://github.com/facebook/react/tree/master/packages/react-pg
process
implement react prisma prototype in the same fashion as react pg we should provide a react prisma package that supports react suspense and therefore react server components
1
190,087
15,217,018,959
IssuesEvent
2021-02-17 16:08:34
terminusdb/terminusdb
https://api.github.com/repos/terminusdb/terminusdb
opened
Reference: CLI
documentation
We need material for the CLI reference section in the new TerminusDB docs. Not sure where you want to put it, but we need a link for the main navigation. Could you have a look @GavinMendelGleason
1.0
Reference: CLI - We need material for the CLI reference section in the new TerminusDB docs. Not sure where you want to put it, but we need a link for the main navigation. Could you have a look @GavinMendelGleason
non_process
reference cli we need material for the cli reference section in the new terminusdb docs not sure where you want to put it but we need a link for the main navigation could you have a look gavinmendelgleason
0
218,385
16,760,255,438
IssuesEvent
2021-06-13 16:32:59
bounswe/2021SpringGroup10
https://api.github.com/repos/bounswe/2021SpringGroup10
closed
Writing project description
Priority: High Status: Pending Type: Documentation
Write project description for milestone2 as described in the given documentation in the moodle
1.0
Writing project description - Write project description for milestone2 as described in the given documentation in the moodle
non_process
writing project description write project description for as described in the given documentation in the moodle
0
15,366
19,538,799,581
IssuesEvent
2021-12-31 14:33:51
joscha-alisch/dyve
https://api.github.com/repos/joscha-alisch/dyve
closed
Process: Release Pipeline
process
Add a workflow that triggers whenever a commit is tagged with a semver release number in the form `dyve-x.x.x`. It should run the same jobs as the build pipeline: * build the binaries and create a GitHub release with them * build the docker images and push them to the registry tagged with the version When the tag is on the latest main commit, delete the next release and tag.
1.0
Process: Release Pipeline - Add a workflow that triggers whenever a commit is tagged with a semver release number in the form `dyve-x.x.x`. It should run the same jobs as the build pipeline: * build the binaries and create a GitHub release with them * build the docker images and push them to the registry tagged with the version When the tag is on the latest main commit, delete the next release and tag.
process
process release pipeline add a workflow that triggers whenever a commit is tagged with a semver release number in the form dyve x x x it should run the same jobs as the build pipeline build the binaries and create a github release with them build the docker images and push them to the registry tagged with the version when the tag is on the latest main commit delete the next release and tag
1
77,752
14,915,762,492
IssuesEvent
2021-01-22 17:10:53
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Support devirtualizing/inlining Comparer<T>.Default.Compare
area-CodeGen-coreclr untriaged
https://github.com/dotnet/coreclr/pull/14125 made `EqualityComparer<T>.Default` an intrinsic recognized by the JIT, teaching the JIT what concrete type of comparer would be used for any given T and enabling `EqualityComparer<T>.Default.Equals` to be fully devirtualized and inlined. This in turn led to optimizations in managed code, where code that was making heavy use of `EqualityComparer<T>.Default` as a default implementation added specialized code paths that used `EqualityComparer<T>.Default`, e.g. in various collections like `Dictionary<>`. `Comparer<T>.Default` could benefit from similar optimizations. `Comparer<T>.Default.Compare` doesn't show up quite as much as `EqualityComparer<T>.Default.Equals`, but it still shows up in a lot of places, and there are a variety of collections (e.g. the new `PriorityQueue<>` being added in .NET 6) that would similarly optimize for `Comparer<T>.Default` if `Comparer<T>.Default.Compare` could be devirtualized and inlined. Is this feasible? cc: @AndyAyersMS, @eiriktsarpalis
1.0
Support devirtualizing/inlining Comparer<T>.Default.Compare - https://github.com/dotnet/coreclr/pull/14125 made `EqualityComparer<T>.Default` an intrinsic recognized by the JIT, teaching the JIT what concrete type of comparer would be used for any given T and enabling `EqualityComparer<T>.Default.Equals` to be fully devirtualized and inlined. This in turn led to optimizations in managed code, where code that was making heavy use of `EqualityComparer<T>.Default` as a default implementation added specialized code paths that used `EqualityComparer<T>.Default`, e.g. in various collections like `Dictionary<>`. `Comparer<T>.Default` could benefit from similar optimizations. `Comparer<T>.Default.Compare` doesn't show up quite as much as `EqualityComparer<T>.Default.Equals`, but it still shows up in a lot of places, and there are a variety of collections (e.g. the new `PriorityQueue<>` being added in .NET 6) that would similarly optimize for `Comparer<T>.Default` if `Comparer<T>.Default.Compare` could be devirtualized and inlined. Is this feasible? cc: @AndyAyersMS, @eiriktsarpalis
non_process
support devirtualizing inlining comparer default compare made equalitycomparer default an intrinsic recognized by the jit teaching the jit what concrete type of comparer would be used for any given t and enabling equalitycomparer default equals to be fully devirtualized and inlined this in turn led to optimizations in managed code where code that was making heavy use of equalitycomparer default as a default implementation added specialized code paths that used equalitycomparer default e g in various collections like dictionary comparer default could benefit from similar optimizations comparer default compare doesn t show up quite as much as equalitycomparer default equals but it still shows up in a lot of places and there are a variety of collections e g the new priorityqueue being added in net that would similarly optimize for comparer default if comparer default compare could be devirtualized and inlined is this feasible cc andyayersms eiriktsarpalis
0
82,521
15,953,321,235
IssuesEvent
2021-04-15 12:18:24
gleam-lang/gleam
https://api.github.com/repos/gleam-lang/gleam
opened
Basic JS code generation
area:codegen
- [ ] Expressions - [ ] Ints - [ ] Floats - [ ] Strings - [ ] Case expressions - [ ] Patterns - [ ] Ints - [ ] Floats - [ ] Strings - [ ] Module level functions
1.0
Basic JS code generation - - [ ] Expressions - [ ] Ints - [ ] Floats - [ ] Strings - [ ] Case expressions - [ ] Patterns - [ ] Ints - [ ] Floats - [ ] Strings - [ ] Module level functions
non_process
basic js code generation expressions ints floats strings case expressions patterns ints floats strings module level functions
0
11,574
14,442,218,008
IssuesEvent
2020-12-07 17:50:19
pacificclimate/quail
https://api.github.com/repos/pacificclimate/quail
closed
Icing Days
process
## Description This function takes a climdexInput object as input and computes the ID (icing days) climdex index: that is, the annual count of days where daily maximum temperature is below 0 degrees Celsius. ## Function to wrap [`climdex.id`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L756)
1.0
Icing Days - ## Description This function takes a climdexInput object as input and computes the ID (icing days) climdex index: that is, the annual count of days where daily maximum temperature is below 0 degrees Celsius. ## Function to wrap [`climdex.id`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L756)
process
icing days description this function takes a climdexinput object as input and computes the id icing days climdex index that is the annual count of days where daily maximum temperature is below degrees celsius function to wrap
1
291,426
25,147,325,929
IssuesEvent
2022-11-10 07:01:46
reportportal/reportportal
https://api.github.com/repos/reportportal/reportportal
closed
Logs attachments imported by /v2/.../log API doesn't have correct name
bug Check: Test
**Describe the bug** 1. Create launch object 2. Import binary attachment to logs using `/api/v2/{projectName}/log` Send POST to `/api/v2/default_personal/log` with data: json_request_part -> json.json binary_part -> html-report.zip Inside json.json: `[{"file":{"name":"html-report.zip"},"level":"info","time":"2020-09-15T09:37:39.333Z","launchUuid":"45a1dfc3-6400-47f5-93c3-49fbd09d4dca","message":"html-report.zip"}]` **Expected behaviour:** Imported attachment name will match with value inside the request. **Actual behaviour:** Attachment is imported successfully. I was thinking that attachment name (while downloaded) will match with `name` parameter send on the request. Instead attachment is named after it's UUID and what is worse, without original extension. **Versions:** - Browser: Chrome Version 86.0.4240.75 (Official Build) (x86_64) - Report Portal hosted using docker, with default settings - Current version:API Service: 5.3.0;Index Service: 5.0.10;Authorization Service: 5.3.0;Service UI: 5.3.0; Edit. I updated Report Portal lib to 5.0.5, based on changelog it should be fixed, but it isn't just changed a bit. **Expected behaviour:** Imported attachment name will match with value inside the request. **Actual behaviour:** Attachment is imported successfully. I was thinking that attachment name (while downloaded) will match with `name` parameter send on the request. Instead attachment was `attachment_51049.form-data`, without original extension.
Logs: ``` [10:15:10] [INFO] [ReportPortalHelper.addAttachmentToLaunch] Adding 'html-report.zip' to launchUuid '4f2dbf90-0dbd-45e4-a735-a7d011228cf1' [10:15:10] [INFO] [okhttp3.internal.platform.Platform log] --> POST http://reportportal.dummy:8080/api/v2/playground/log (1107347-byte body) [10:15:11] [INFO] [okhttp3.internal.platform.Platform log] <-- 201 Created http://reportportal.dummy:8080/api/v2/playground/log (686ms, 61-byte body) [10:15:11] [INFO] [CucumberExtentLibrary.services.ReportPortalService executeRequest] {"responses":[{"id":"b69cf438-afc0-4b4b-90a4-9c6d5dae9e0b"}]} [10:15:11] [INFO] [ReportPortalHelper.addAttachmentToLaunch] Attachment 'html-report.zip' successfully imported. ``` Same attachment visible at Report Portal UI <img width="963" alt="Screenshot 2021-05-13 at 13 10 28" src="https://user-images.githubusercontent.com/3976951/118118104-c3966800-b3ec-11eb-8a98-e46e8df37017.png">
1.0
Logs attachments imported by /v2/.../log API doesn't have correct name - **Describe the bug** 1. Create launch object 2. Import binary attachment to logs using `/api/v2/{projectName}/log` Send POST to `/api/v2/default_personal/log` with data: json_request_part -> json.json binary_part -> html-report.zip Inside json.json: `[{"file":{"name":"html-report.zip"},"level":"info","time":"2020-09-15T09:37:39.333Z","launchUuid":"45a1dfc3-6400-47f5-93c3-49fbd09d4dca","message":"html-report.zip"}]` **Expected behaviour:** Imported attachment name will match with value inside the request. **Actual behaviour:** Attachment is imported successfully. I was thinking that attachment name (while downloaded) will match with `name` parameter send on the request. Instead attachment is named after it's UUID and what is worse, without original extension. **Versions:** - Browser: Chrome Version 86.0.4240.75 (Official Build) (x86_64) - Report Portal hosted using docker, with default settings - Current version:API Service: 5.3.0;Index Service: 5.0.10;Authorization Service: 5.3.0;Service UI: 5.3.0; Edit. I updated Report Portal lib to 5.0.5, based on changelog it should be fixed, but it isn't just changed a bit. **Expected behaviour:** Imported attachment name will match with value inside the request. **Actual behaviour:** Attachment is imported successfully. I was thinking that attachment name (while downloaded) will match with `name` parameter send on the request. Instead attachment was `attachment_51049.form-data`, without original extension.
Logs: ``` [10:15:10] [INFO] [ReportPortalHelper.addAttachmentToLaunch] Adding 'html-report.zip' to launchUuid '4f2dbf90-0dbd-45e4-a735-a7d011228cf1' [10:15:10] [INFO] [okhttp3.internal.platform.Platform log] --> POST http://reportportal.dummy:8080/api/v2/playground/log (1107347-byte body) [10:15:11] [INFO] [okhttp3.internal.platform.Platform log] <-- 201 Created http://reportportal.dummy:8080/api/v2/playground/log (686ms, 61-byte body) [10:15:11] [INFO] [CucumberExtentLibrary.services.ReportPortalService executeRequest] {"responses":[{"id":"b69cf438-afc0-4b4b-90a4-9c6d5dae9e0b"}]} [10:15:11] [INFO] [ReportPortalHelper.addAttachmentToLaunch] Attachment 'html-report.zip' successfully imported. ``` Same attachment visible at Report Portal UI <img width="963" alt="Screenshot 2021-05-13 at 13 10 28" src="https://user-images.githubusercontent.com/3976951/118118104-c3966800-b3ec-11eb-8a98-e46e8df37017.png">
non_process
logs attachments imported by log api doesn t have correct name describe the bug create launch object import binary attachment to logs using api projectname log send post to api default personal log with data json request part json json binary part html report zip inside json json expected behaviour imported attachment name will match with value inside the request actual behaviour attachment is imported successfully i was thinking that attachment name while downloaded will match with name parameter send on the request instead attachment is named after it s uuid and what is worse without original extension versions browser chrome version official build report portal hosted using docker with default settings current version api service index service authorization service service ui edit i updated report portal lib to based on changelog it should be fixed but it isn t just changed a bit expected behaviour imported attachment name will match with value inside the request actual behaviour attachment is imported successfully i was thinking that attachment name while downloaded will match with name parameter send on the request instead attachment was attachment form data without original extension logs adding html report zip to launchuuid post byte body created byte body responses attachment html report zip successfully imported same attachment visible at report portal ui img width alt screenshot at src
0
5,505
8,376,013,010
IssuesEvent
2018-10-05 18:20:34
googleapis/google-cloud-java
https://api.github.com/repos/googleapis/google-cloud-java
closed
Setup synth scripts for generating clients
type: process
This will enable us to automate the regeneration process outside of the release process.
1.0
Setup synth scripts for generating clients - This will enable us to automate the regeneration process outside of the release process.
process
setup synth scripts for generating clients this will enable us to automate the regeneration process outside of the release process
1
17,648
23,469,487,237
IssuesEvent
2022-08-16 20:12:29
ForNeVeR/Cesium
https://api.github.com/repos/ForNeVeR/Cesium
closed
Nested include support
kind:feature area:preprocessor
Look for the number `239` in the code to find clues to resolve this issue.
1.0
Nested include support - Look for the number `239` in the code to find clues to resolve this issue.
process
nested include support look for the number in the code to find clues to resolve this issue
1
18,181
25,145,601,167
IssuesEvent
2022-11-10 04:54:55
mkj/dropbear
https://api.github.com/repos/mkj/dropbear
closed
openwrt dropbear ssh timeout
compatibility
last time,we use dropbear 2020.81-3 version,we can ssh to my remote openwrt across ipsec-vpn; but this time ,i upgrade my openwrt, and fine the dropbear version is update to 2022.82 ;we can't ssh to my remote openwrt ;but between remote sever and remote openwrt , it can work ;so i think maybe IPQos influences this ?
True
openwrt dropbear ssh timeout - last time,we use dropbear 2020.81-3 version,we can ssh to my remote openwrt across ipsec-vpn; but this time ,i upgrade my openwrt, and fine the dropbear version is update to 2022.82 ;we can't ssh to my remote openwrt ;but between remote sever and remote openwrt , it can work ;so i think maybe IPQos influences this ?
non_process
openwrt dropbear ssh timeout last time we use dropbear version we can ssh to my remote openwrt across ipsec vpn but this time i upgrade my openwrt and fine the dropbear version is update to we can t ssh to my remote openwrt but between remote sever and remote openwrt it can work so i think maybe ipqos influences this
0