Column schema (from the dataset preview; ranges are min/max values for numeric columns, min/max string lengths for string columns, and distinct-value counts for categorical columns):

column        dtype          range / distinct values
Unnamed: 0    int64          0 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  1 to 744
labels        stringlengths  4 to 574
body          stringlengths  9 to 211k
index         stringclasses  10 values
text_combine  stringlengths  96 to 211k
label         stringclasses  2 values
text          stringlengths  96 to 188k
binary_label  int64          0 to 1
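The schema above can be mirrored in a small pandas sketch. Everything here is illustrative: the DataFrame is built by hand from two of the sampled rows rather than loaded from the real dataset, and the only relationship checked is the apparent rule, visible in the samples, that `binary_label` is a 0/1 encoding of `label` ("process" -> 1, "non_process" -> 0).

```python
import pandas as pd

# Hand-built from two sampled rows; the full dataset has ~832k rows
# and more columns (title, body, repo_url, text_combine, text, ...).
rows = [
    {"id": 11_813_322_274, "type": "IssuesEvent",
     "repo": "carla-simulator/carla", "action": "closed",
     "label": "non_process", "binary_label": 0},
    {"id": 9_661_775_338, "type": "IssuesEvent",
     "repo": "googleapis/nodejs-pubsub", "action": "closed",
     "label": "process", "binary_label": 1},
]
df = pd.DataFrame(rows)

# Inferred rule: binary_label encodes label ("process" -> 1, else 0).
df["binary_check"] = (df["label"] == "process").astype(int)
```

On these samples the derived `binary_check` column matches `binary_label` exactly, which is consistent with `label` (2 classes) and `binary_label` (0 to 1) describing the same target.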
Row 113,748 (id 11,813,322,274)
  type:         IssuesEvent
  created_at:   2020-03-19 22:06:13
  repo:         carla-simulator/carla
  repo_url:     https://api.github.com/repos/carla-simulator/carla
  action:       closed
  title:        Remove Autopilot dependencies on CarlaMapGenerator
  labels:       backlog documentation
  body:         Right now our autopilot needs a CarlaMapGenerator Built in the world for it to work correctly, even in maps using the new roadrunner pipeline. Remove those to clean the map generation pipeline.
  index:        1.0
  text_combine: Remove Autopilot dependencies on CarlaMapGenerator - Right now our autopilot needs a CarlaMapGenerator Built in the world for it to work correctly, even in maps using the new roadrunner pipeline. Remove those to clean the map generation pipeline.
  label:        non_process
  text:         remove autopilot dependencies on carlamapgenerator right now our autopilot needs a carlamapgenerator built in the world for it to work correctly even in maps using the new roadrunner pipeline remove those to clean the map generation pipeline
  binary_label: 0
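Comparing the `text_combine` and `text` fields across the sampled rows suggests how the lowercased `text` column was derived: markdown links and HTML tags are dropped, the string is lowercased and split on non-alphanumeric characters, and any token containing a digit (version numbers, hashes, UUIDs) is discarded. This is an inference from the samples, not the dataset's actual preprocessing code; a minimal sketch under that assumption:

```python
import re

def clean_text(text_combine: str) -> str:
    """Approximate the text_combine -> text transformation inferred
    from the sample rows (not the dataset's published pipeline)."""
    s = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", text_combine)  # markdown links (text and URL)
    s = re.sub(r"<[^>]+>", " ", s)                         # HTML tags
    s = s.lower()
    tokens = re.split(r"[^a-z0-9]+", s)                    # split on punctuation/whitespace
    # Drop empty tokens and any token containing a digit
    # (so "v1.22.3-rc.0" reduces to just "rc").
    tokens = [t for t in tokens if t and not any(c.isdigit() for c in t)]
    return " ".join(tokens)
```

Applied to the `text_combine` of the rows above, this reproduces their `text` fields exactly, including quirks like "doesn't" becoming "doesn t" and "0.29.1" vanishing entirely.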
Row 244,902 (id 20,729,111,087)
  type:         IssuesEvent
  created_at:   2022-03-14 07:31:16
  repo:         cockroachdb/cockroach
  repo_url:     https://api.github.com/repos/cockroachdb/cockroach
  action:       opened
  title:        rpc: TestHeartbeatHealthTransport failed
  labels:       C-test-failure O-robot branch-master
  body:         rpc.TestHeartbeatHealthTransport [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4566826&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4566826&tab=artifacts#/) on master @ [231fd420c61e39b6ad08c982c496d82cf1910bc5](https://github.com/cockroachdb/cockroach/commits/231fd420c61e39b6ad08c982c496d82cf1910bc5): ``` === RUN TestHeartbeatHealthTransport context_test.go:757: rpc error: code = Unknown desc = client cluster ID "5745c6bc-64a8-40f4-9d60-a9dc253c27e8" doesn't match server cluster ID "f8521426-68e9-429d-abd3-99406a9db4de" --- FAIL: TestHeartbeatHealthTransport (0.05s) ``` <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) Parameters in this failure: - TAGS=bazel,gss,deadlock </p> </details> /cc @cockroachdb/server <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestHeartbeatHealthTransport.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
  index:        1.0
  text_combine: rpc: TestHeartbeatHealthTransport failed - rpc.TestHeartbeatHealthTransport [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4566826&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4566826&tab=artifacts#/) on master @ [231fd420c61e39b6ad08c982c496d82cf1910bc5](https://github.com/cockroachdb/cockroach/commits/231fd420c61e39b6ad08c982c496d82cf1910bc5): ``` === RUN TestHeartbeatHealthTransport context_test.go:757: rpc error: code = Unknown desc = client cluster ID "5745c6bc-64a8-40f4-9d60-a9dc253c27e8" doesn't match server cluster ID "f8521426-68e9-429d-abd3-99406a9db4de" --- FAIL: TestHeartbeatHealthTransport (0.05s) ``` <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) Parameters in this failure: - TAGS=bazel,gss,deadlock </p> </details> /cc @cockroachdb/server <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestHeartbeatHealthTransport.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
  label:        non_process
  text:         rpc testheartbeathealthtransport failed rpc testheartbeathealthtransport with on master run testheartbeathealthtransport context test go rpc error code unknown desc client cluster id doesn t match server cluster id fail testheartbeathealthtransport help see also parameters in this failure tags bazel gss deadlock cc cockroachdb server
  binary_label: 0
Row 33,002 (id 4,454,519,174)
  type:         IssuesEvent
  created_at:   2016-08-23 01:16:38
  repo:         geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
  repo_url:     https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
  action:       reopened
  title:
TRCzT13XAfird313adxPKWEkiZ6/yzHdBaXmIZu7TqEOs2TFFdJUegDeYNAbMT48In0nMC2KqZrpbOren73zewmspGGp9Bdw0/5rf7vjuICQpmucJGuwpUgBIZ9EKgiWNiS4Kmz6io6bmfVqvh+VhlmjQfJFCX7Oi2+0/SolQ90=
  labels:       design
  body:
HyGVnNiduvLezui3rYzmQR8X9fVvL0/wjW5BIXRxK3XCEW5pyxTQY0g11YLEqXc4/0eahojwjYr2/Thqp1ZwsCwIi3XEES0Mc3tEYRj0N7xkgj/7ZAwKXGMrvEsLZv4m+nLpempNtcqOH7x9/vGyvjiP0YP+iDqwu2+4OpaPGqd0N/BEcQHcj68i5eqeJkEOib0NSlj7nCYZ/4YaF0rE2znbC5sClxfKoOciCi/lStr87ZkoKBXgbwtdf3gct9ipp9R+aw4Iqdo0+m7ykLKU7qwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zmRcukElC4rivVP19lyxxh6h1XX/sh1ESHTjfi2vE4ODsO89BYUP3HvqHsdA+lehvrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXNz6IW4RgeQ+s4KVoUqmSufVqmH1OvJNObh2wujePgUBOc3sBG9n4wD9NIO4XrDz+SsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc43G6UZckPpkV78htc8vpO5a0yykWPck2IBEBYQRPpQxB6EAwg6EwJGYoFBsEdK12awUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zXDy2ayvhJWOeA9HAZFUl7914c1MArdqCMlMkfOOCmWSO5POhI4767vfE+AaLNsL4rBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXMrr3QGgqrA7s27V76BtUeLWtMspFj3JNiARAWEET6UMQcA35pttCCran/DNoQbEhGsFF0PXuYLhVB8BpELqWQBYT5hGpyfQyizXqWNiPf5QZ9o7EYUTSfHv0VomAhgHbIE+8HXR8S7r9KlxUT8IhkoYf+LE0l+qlunls2IpFXf66wUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zMb8XETGh5brY0OkJasoGTlrTLKRY9yTYgEQFhBE+lDFvpaa4XHMc+jlxTZ27oEOzrBRdD17mC4VQfAaRC6lkATKsSHYDBjzeRCM1BjojNDyt0Pkfc+hvl8QTBb7dvnmzLkV7aV9jdABLBqdGN1K1Yybbl0YxyV2/CN7rCul12tSsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc5LQWG+5dTxPEPELW0LgTqHLMeyfFfu5QDc0R+CdBxXL5iio62APQZYvvVnpYktvpawUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9Lf8XhOADJGLj1N4Gjc0kv0nZadw2c2aZMOD2/EoZL7/ArDptIGwB1hhcyZnHNazyrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXOLPGDlguCFm1DDJEGC7q6mSdlp3DZzZpkw4Pb8Shkvv8BQV7znkNcGevIDZsgT5xisFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc99NLTAj2LfetfDOOg6AxnpJ2WncNnNmmTDg9vxKGS+/yzTk7Y109K4pH+5ON21dR6wUXQ9e5guFUHwGkQupZAEyrEh2AwY83kQjNQY6IzQ8WHUn+eznEHSy2YrJ2TfER0nZadw2c2aZMOD2/EoZL79uv0pmOL/ojBmDWGTlaolOrBRdD17mC4VQfAaRC6lkATKsSHYDBjzeRCM1BjojNDw1dX9CBfQzYg+uNrSdXJf7Sdlp3DZzZpkw4Pb8Shkvv2jjvdtAGSTvjyYVpyBw4LWsFF0PXuYLhVB8BpELqWQBMqxIdgMGPN5EIzUGOiM0PI0SL0AJWEsSl3R/FD6u7X5J2WncNnNmmTDg9vxKGS+/8lCxlgrGa124zKWNOrBJ76wUXQ9e5guFUHwGkQupZAFJrjI1wQCD/+pM4xcPHOxFgplaElLKZIb4cG6pacOEh4uIbQTjrxZvebU+sG8YeDRW7t830tM0Izb0/V/DWQZurBRdD17mC4VQfAaR
C6lkAfg2ITrZBAqIhDYEKnw9TXMTgASVKIPGOsKO/S9s3niYSdlp3DZzZpkw4Pb8Shkvvwl4exY+UpjjMlLguhpg+5qsFF0PXuYLhVB8BpELqWQBMqxIdgMGPN5EIzUGOiM0PLSdbbYKjMch8ELmzsyHxW/OXtWSwH3jAAi45/bQnl/xHOlIKws138TD5aNlEcpsK6wUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zP1dmiRxjP0WPai6+0+8TPXXOUg65P7MBW9p3p8LY5JOwcBGvKGi7txANJeu8/AyNrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXOtnuR+HaGYI1UvoKHtnKx2QIpNKAU84nw2PUsKHXqrlH7KZftA/7K25VqVYT8QraSsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1famK/jocM/jVbbXDOW27uWqusn71X5p4WP+sN3tWHQxHsl/kVrelSh0W2F+T66mS9qwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zy2WfBnp7G1ZrDcdxHI9Rg66yfvVfmnhY/6w3e1YdDEcOMmqpi5aO95d4TmvPdJTlrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXM3+98HsVlh7raKXuKOl913rrJ+9V+aeFj/rDd7Vh0MRxKtrKBizOq4NsIDThU5lkCsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc6LnrQORqu/OHn2WJQLPDe9J2WncNnNmmTDg9vxKGS+/YlqdJWvozDH5HAVQIrG9a6wUXQ9e5guFUHwGkQupZAEyrEh2AwY83kQjNQY6IzQ8OALFhOhEAzlJPz1P4oj3R0nZadw2c2aZMOD2/EoZL78XE5HoxwoeQ6YO4WMi59X3rBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXM51dvJTYM3lvmqRfMj3VIfsxtGSrW6sIX9Wyis0XHhBKL3GSS17EhJAoDgIRkeg52sFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc29qps4+LbeKSHIn/GLLuKnbVyBvUwpEZXxDbgl9IYVmj7nNOVzJ8xEGwsKB7D22U6wUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zkosBQPU8sNCegj/if3m0Rol+NAEE/kBH1Hx/dU+PLnM3ssm1e0YtI4O9Why2bR2KrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXMS6Maq61hLNhMrZmCni7E8Sdlp3DZzZpkw4Pb8Shkvv5vCOW2MM7efsBgNdeLnHT2sFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1NcwllxyJy54PS/ELD15WAyhiKu1T4q6W6ajdsOKq4qTJ+lmtnoXX9IKlhcfPlA1styawUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zyNc+n02QDn1u0E0KF+CzzTE1a24ABCufUNMiEwgL6L7p7BYvJZnWNNfWxsDrHeDdrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXOiEPQ824pvsiisMbq/e08ilBcSz4iPg2nJWrUqTvS+HYsmD+D7z3WoAYO18wcoUACsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc1RcbhcYmIzU0vwbxdxlwsmnpsYVAyA+C9TMIw9k7/Y/OC1qs/VWM9LHOPFbww8YkawUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zMkXtB7hQSu0KbJaiDXgBUsJARZCY9njN2aEtDCmYb7D3fEfX8d5Gk1x06XLOqZPzrBRdD17mC4VQfAaRC6lkAcglDHPAygmsdYjx4vp9F8620nudhM9kcGsWtn763+8eSlkqEWMqIt26NoBTzl68Xpjb7OoPn/rG
GT7jA7jsgn6sFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc4WXNKy3Ut0zoVvX7nvElWk6ek4SyOOvWrGyVxv8eAzhJSxAUm+tV7o61AEJEFAZbqwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zBdb7xPShFzD3QcpCkRcQmUnZadw2c2aZMOD2/EoZL7/UqN2xj3+gcSIHE0Ku9SnirBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXPrQIWYhULAEdHaabK0OTBZhJfS/Y5kidl+2oxiTfaxdv++kxq5UJTBgvhIdn730EysFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc1nVz7of7Y472vEGoji0TG4JTrojFWXcxEwzLdlJvG1kFdKA37clay9zn6UnXAEhZKwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zvt3+vC3dNyOWbd2uW/GNAS5Fe2lfY3QASwanRjdStWNtbIjqdcobvRMt8UtPQl9irBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXMI3cxcPkyyEG5K+LMteShY/t3rd8+g5fd5CNL0es/Wa8Ff+OsGz/4bjaWpNmu7hqusFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc7qcfB8Ulu0kG8HLhhR5E8zNpoGxe33sc40vXZ6JK3dQSOMByBEP1ILW3wY/OxPmI6wUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zDys3uz5KtIIJE0keca9uGaEIkaFv081Tm//vcdp33Gk7I32FXhZ1C0dlobF657OcrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXPh+HibFYU0KAtjZ4yz/Wnt+smz+MyoMGNvxtefyhMz7sh+TwB550auKRxFn6JIW9qsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc8jmV8S6Xif2LZ9/DwxAngREUS5k2qZ7fCednNqUy9HSDcN8g0LPh5GYf3l1YPz946wUXQ9e5guFUHwGkQupZAEyrEh2AwY83kQjNQY6IzQ8BcNqPDvBFu20OOE9+slD+5+CMlXC+Iwr8FXGg3dgMPKWTCgY2Q91j9hBHGULLb56rBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXOcAde+WuYESihXbtqS6S9axr5RBHSPGop1HDKe73kqqLQcKOsPHGbdE0sCdjWBHMysFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc/lYeHTFyOOawMyhDAQi6/v6ybP4zKgwY2/G15/KEzPu7Ml08UDY92qLsdn3BvP5EqwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1ze0hQZxb9lo2TDz0Lbo4agUknlTGCzeusD6QnDle1ZtpzqiKvgiyEloveTHQRsxyIrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXO/+UNLBpgCh3kYei//8tRYkUtso6pYIhML8LNjO9HEO3VcExx6+EkV1rjU+SgT3desFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc+eqqXevX1ET1l6tfR9+dEY0bTGS9tOh+7dkwKKcD93zy47/W0oDGbxDDx737bCvq6wUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zEellDVDiIQgeXd3HPklLsJDdwBSSDk7l8XUsMA9eMBmnRMTwmSMMsgYFV1LJwTFjrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXO5D9nOMuvqZ5v3yyur2rT2Sdlp3DZzZpkw4Pb8Shkvv9JABCm23fpUkdk890ANMQ2sFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc+2GEBL8VpXMlzyvhSL5y4BJ2Wnc
NnNmmTDg9vxKGS+/b1Wke2xpKAcaZSGqueRn/awUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zdomcXuNhzEv4WuvWfyZwXeNwkHTZq4LbR3onA57eTP+f7gfVfOmk8XwPgO0YskYurBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXOeOjVpCwRWnZDMAgJTZr1o0GLcYV/ECeA0wudBsK4Q7ehHSoXs52l75LjTiqAG6GisFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Ncxc/D8jWesS5zegvD1EgdlJJ2WncNnNmmTDg9vxKGS+/4AEX6UavbiO93vYC3XgAuawUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zfQPLOvHDPYbEJ88XMi+0iEnZadw2c2aZMOD2/EoZL7+P0XVpbevBqetwxX+LcDSfrBRdD17mC4VQfAaRC6lkAcglDHPAygmsdYjx4vp9F86PCCI55gnpqf7Teb66bNbBYZrDGc6CPf7Cds64Qr9/gINUDuUqSjn1nJ0DLs2gpq+sFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fU3Azv+Br7Gf5EtbMdb2FlPeR/jPissliz47ZYa6qnu5STA7cZbOJgJ8hYfSH1LKgqwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1zaTeZm47qLkYJlbHlV6bNtU4Wr7t+9AHEChZ0y01enmkZDc2g5TtPbhWqcVp8lc2VrBRdD17mC4VQfAaRC6lkAfg2ITrZBAqIhDYEKnw9TXNEsJUpFoBh/HWoLRYzhrQYDTRrFHMexKFrxGR/8yiY7ZioUc2vTwQYIJn2slbxkmSsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc2fwY/ERWd96opvNkW7v+lwLVm7exOyyJb0ykHwDjP/sQkdUmV+iQzdKHS/x3lo+CawUXQ9e5guFUHwGkQupZAEyrEh2AwY83kQjNQY6IzQ8GQ57dc4mlvim0dKFC1Fuadah/+XZYAY+tJTNOt9cFZgFm6YuUhXXKuhvv9jszU87rBRdD17mC4VQfAaRC6lkAUmuMjXBAIP/6kzjFw8c7EUAHly2iEEegiqKNv4+YQgwp/e4vfzvD0cYZyluR4tZ3rYCdnSj8AxHpwVnIbZ/HUWsFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1NcyoiDzZZ5YoFLNFRHq0Qpej0cPnWV5bwkOM00nDWUnhNqEIW7+qEXPvrJSuos9JnZKwUXQ9e5guFUHwGkQupZAH4NiE62QQKiIQ2BCp8PU1z7WI3ThJAaJ5c2ksbmHft+inZuNyomTbH/vpTTNRo4pR+k+2gJ/I23xir4acrl6yArBRdD17mC4VQfAaRC6lkAcglDHPAygmsdYjx4vp9F86kb7Ogr/wGD8OZx70d3AwJPNOlwEw802COS1gGoG0JINToy07STxgmajXQ5TLGuh6sFF0PXuYLhVB8BpELqWQBSa4yNcEAg//qTOMXDxzsRV5w8NQQDIFkOovMuMTdV2nonpt3YJrVtltexUii/+NgpqzNIhfrief/rQMv/NRS6KwUXQ9e5guFUHwGkQupZAEC9sTCPARY+0mZ1At+wyJyjCryPxlAjG06Jue4AwhmAmGawxnOgj3+wnbOuEK/f4CFbQRmnk9r+DPI9Egf/QQyrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X2n0AhCshDumM6tQGS5pt4jJd5PQOx8KmSNFitjVpEv1+Ei8Y4bKS/NhkY7Cw3Qh7+sFF0PXuYLhVB8BpELqWQB+DYhOtkECoiENgQqfD1Nc3Rcm5RMJZv8dOtpiCUdzi4uRXtpX2N0AEsGp0Y3UrVjpwxUYbnhz1goDtkDXz4liKwUXQ9e5guFUHwGkQupZAEfNaRQA1AvM7uhZutUbGM7
NfBXUF7sGTiSGiYQMxc40BlNuoFO1WCoDJ0rc1CBRRfodcw0zUYRI6dcVT+bqZywrBRdD17mC4VQfAaRC6lkAe3YEe1VGqcqa7qWkSKkJw/M7erGckLBijuBmnRUnOOq7cx0B5Apgh1Y5/b9vYRWJBXJMqKRMpogqJwvaq6mscqsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1ffSLjsio/S3t54PhhZkQwo4A08/KBjPmFfMufXs0yaCvcKMtxsjDDguCgmQ0Sg0Ug6wUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV943/hmw2RW8kjKQr1dDCn7mWrU5dSnj8xr8yUTn84s/AK7vvkwpmq4FSWaXgQxz2nrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X2XWrWQjmwsjRPTf9u0SuTMksQjyEMdNTfvRlAcHWXCZaU+UfMYNAwUd1WbIv4xTVesFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fZhfSlvzubC1CwRvhR9HEIhAik0oBTzifDY9SwodequU6Iq7FLePxJzAw7BIWWLdLKwUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9JKMY1PVWePPZ8bOeGrSXE3uE1vFNBEkJ8tcH42ZtgwvtXwilcwU7z34b0RdN2UZdrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X3rQmiqqAvEQ9I1N1racUk3QIpNKAU84nw2PUsKHXqrlN0aKTBNS9QBCdUMDQTld02sFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fTQ0qM61UujQdXXSacX2MYVHA8pqUI4wJ6e1CpXnXHIrUG5kpmlTDw1Ktw0RFV0Fw6wUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9U2AUpigvYSBVOl4pjhwHXE3M+ixmZe4YXLZepO3SrvSFcgizRUbpLYo2PscE1FysrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X2oX7MYtepzXXK0b+0loue5P+YWSJSbu3VkZZx4vGKl2GO5Ssr88tt269KKDvTbEbusFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fRXQIT7nlQGAs6C4jPKolw6zAFxwIHVuSya9ENWK5HkViXSEoemJ8YBBytPk5R6Z2qwUXQ9e5guFUHwGkQupZAFP4UYH34T3sjDCvY9oTmhsm2eGWr6YCx0oUT95XRcUCZG4A08WFmYlPJyMq32DzsiC1CfQceE2zkpIsWyZADzUrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X1ZUmmoj8+Sw98kHgNdvsymX0pImXPrZgt2yg7jjbcZCtA+yc5HEIss4rHSLXw/LSisFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fUUq/nOGxlGgeo8dSbwp9diYaswZRHMdYGN1cnl4QQK/rsy0IiY7WspYwdusOmIG+KwUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9wm7kv18t1ZCwH854sQpeC9xRyaHGvAIC9hvL+Pl0ON+xXU8UyssoVScqBsGedcARrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X0bliE/QKFO+iWtadSshvZjtMnCDm6uYkQwtScsI7XyB5BrCsA6UXwALxF9WmuZFbmsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fRCvmYXw7ByNZBTPYzRcOUdlxHvyjMADcu2AREECtA8ZOl+6bQo6OltueOjjKPTHlawUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9aSUDKUwEMUasSrXgxYChkZudcCAsV2W6uTpVpe2VRRo36Pk5jR2wbLMG1KQ+EttMrBRdD17mC4VQfAaR
C6lkAV4Jo7f+EPxLACr8JkiA9X2wAFFOYKs0t/hQYv1Ghm1NY3sXaI12mv+BhJ4s2DOFqDmIwijBnnvTCcztcc9bg6WsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fZuyvanp6vScYmSNKeIzpkjH9mL4yCGDvFlwhd8Z18XdsfZtllr3ljhxSsprHDObT6wUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV98zEeC6FTX1/C4puAMaZskr+cnFTKgp+G87G3l37h2FMH3EjytkbRGlrEI5KiqXN7rBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X04qE6yL9BXVIHc+2sTzE3PTcz6LGZl7hhctl6k7dKu9G5+iC9bZa1s6Vuc+upOIOqsFF0PXuYLhVB8BpELqWQBHzWkUANQLzO7oWbrVGxjO6kka45QqsWW2sCwwoD/GPhREl+IijAwE66gPsPzsZNiJIsm5ImXbQujmv7xoZE6EKwUXQ9e5guFUHwGkQupZAFVSzlQGPhQFiAYshBB4uls1BNiyA7JHN8FJmpr/VHbQA+x/oOdABQd+d4v7lO9edfpaMx8v9mkazIU53eKf7evrBRdD17mC4VQfAaRC6lkAYao36Jwbo54ugnbNQhS9/eNOcjFlcwcZLoYbHJ5tcDvD7H+g50AFB353i/uU7151ylXsZZB0HxPzT6D4ctKJgisFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fWMAJjYyJeKUnLwPZ7+MvZIPsf6DnQAUHfneL+5TvXnXEEGLZfjC6jjS6Fv5liYtf6wUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9k8UtqF/wW/aTC5xIuJd4G8OkSn2+iWG7Gx+iLxhcw3oVnR5slc/62YOkW2hAQi2rrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X34WjENiXd7UxQ7s5tvsTA0Sdlp3DZzZpkw4Pb8Shkvv8m2a/KnTQHWSBFFsLyJYqCsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fTEBtwEGEK0T/VA44DZNKthwdQqLwuJbO8x+/5fqaPbZ4JC+lZ6NGHcr2Ip0BnkKjKwUXQ9e5guFUHwGkQupZAFVSzlQGPhQFiAYshBB4uls9HBfBPUpXy7bUDLI+IqrCUnZadw2c2aZMOD2/EoZL79VkyA5DZXXuWgO6dCwouoVrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X1x8LDBOIZDNdnL51bBnOY+javXYU5BdolH0gOMTv5DdvPTH90DGltrO1hX8wU514asFF0PXuYLhVB8BpELqWQBAvbEwjwEWPtJmdQLfsMiciDITkw+dYZ/uVtwYh4n0NBJ2WncNnNmmTDg9vxKGS+/2ZHobe7jOJGwXqcHbM5gxawUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV90vD16X/pQzFILZNybEZm1u3VQt7vFm6OYURAE+4WZ2Qplxyp9UzLUU7oDOOfVgdirBRdD17mC4VQfAaRC6lkAW44f30wNy1U7UgsWS7sqCfxUtlSirF6ZcZNM9ChpOdX7dVC3u8Wbo5hREAT7hZnZGP+FXFTcJ8IrT9F9ZhWI5qsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fdqQfVxyr9kFAzry5IYR95Tt1ULe7xZujmFEQBPuFmdknzbYqUx++q5rQRojUJCIl6wUXQ9e5guFUHwGkQupZAEC9sTCPARY+0mZ1At+wyJyfvwpFowKCoGMpSUWivkYXO3VQt7vFm6OYURAE+4WZ2QdboLGnwxAekectyO2R711rBRdD17mC4VQfAaRC6lkAVVLOVAY+FAWIBiyEEHi6WzdEBmOT8bUJht2idy645Y/7dVC3u8Wbo5hREAT7hZnZGam1CctllWr
pZRFG4l9FDGsFF0PXuYLhVB8BpELqWQBT+FGB9+E97Iwwr2PaE5obMG/C6mwxXwt3qhxYAANwuHt1ULe7xZujmFEQBPuFmdkKwl+tl+w1x9vyF4jVvbCt6wUXQ9e5guFUHwGkQupZAFeCaO3/hD8SwAq/CZIgPV9zbXI/6MElrtiFo2pFGHsGe3VQt7vFm6OYURAE+4WZ2Reqz3UdH1FsxuOh7jbdwDXrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X1CF3svOebHn4JEcUzalxo/7dVC3u8Wbo5hREAT7hZnZPtTXsj5QHa+ky044TmB2LysFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fVHX5N0wCCTh5k3rEc3Frmvt1ULe7xZujmFEQBPuFmdkuwonIKeE8djhOAH3S+nUcawUXQ9e5guFUHwGkQupZAGc3+52i90DTqmCmiWE1O56XbLH2yEoUN6RQmvH0DW0Le3VQt7vFm6OYURAE+4WZ2S5JVD66bqYYGMpN9DpfR4mrBRdD17mC4VQfAaRC6lkAV4Jo7f+EPxLACr8JkiA9X3KO58HX7or9cepKdFglQmh7dVC3u8Wbo5hREAT7hZnZIw+ePgUo6U43XjLT2MZhAqsFF0PXuYLhVB8BpELqWQBAvbEwjwEWPtJmdQLfsMiciGBHjozhX7wXRwKtdwX/Q/t1ULe7xZujmFEQBPuFmdktjNELrS6puVY+qEriJTdhqwUXQ9e5guFUHwGkQupZAEC9sTCPARY+0mZ1At+wyJycEfdcXBHvuQv2RIl6ijwqZsGb4/Iv1+q8GQEaxlDMXqC+Qjhuqku4ZtYAb+R9qfprBRdD17mC4VQfAaRC6lkAYwpnvEShFC+Q5g7kEOa/Up2XHPA307ZvKyVgvuE0aAwmwZvj8i/X6rwZARrGUMxeplzl82QvQt7EOzF0+3074CsFF0PXuYLhVB8BpELqWQB1Sx785bJrxfXwKlZj466R344avfCDEV2UgqPrwsUb/tPBiZ1UN2ApjsCxTMHA9bEENRFxQZK6mtGBm9XmLYWiKwUXQ9e5guFUHwGkQupZAEC9sTCPARY+0mZ1At+wyJyl/YWkWnYjziMdzMiQvIhQk8GJnVQ3YCmOwLFMwcD1sTxPiAQSRmSOwsUfz/iMG0grBRdD17mC4VQfAaRC6lkAf/595zh+UUCHwINqQ/ypMCxytWmUc0yctZgNva8MK3O8iXkz1YzfIHDskCs3R8sVmc5vZeaN8CmpBCUUc7ErFWsFF0PXuYLhVB8BpELqWQBXgmjt/4Q/EsAKvwmSID1fd0VZwZEmRT0eOKipFiQ2XDt1ULe7xZujmFEQBPuFmdkPV+FL8bkgOEsmwdg3wRmgawUXQ9e5guFUHwGkQupZAHt2BHtVRqnKmu6lpEipCcPLHudseSqGOMrZJ6ek6+kZu3VQt7vFm6OYURAE+4WZ2QFB+ZiX5ravRYs7C/uMCfgrBRdD17mC4VQfAaRC6lkAR/cyZFU+GIEDP7e99Gk/4qSVzWVYmQREeKfWP+CIaI9UBzXfHR/GjStCpsUWW1u+A==
  index:        1.0
  text_combine: (title + " - " + body; verbatim duplicate of the title and body fields above, omitted)
  label:        non_process
  text:         v upjjmllguhpg bqnl jocm aefj lbekshin naee du hysmd y lmteshy osgz wnt smz sld cmlxc hwolryzhrqydtrrfhmexkfrxgr sqkdumv iqzdkhs xzyay yqgwp inzunyomtbh k ngpqznihfrief rqmv wnbouek qkfo cnftkgp gphrel vhbqa x oodabqd ww dyz tl q wyjyl uuchwinqq cyzfu a
  binary_label: 0
6,583
9,661,775,338
IssuesEvent
2019-05-20 18:57:53
googleapis/nodejs-pubsub
https://api.github.com/repos/googleapis/nodejs-pubsub
closed
Publish 0.29.1 to npm
type: process
It seems as though the `0.29.1` release has been made in the repo but not published to `npm`. Could this be addressed? Thanks.
1.0
Publish 0.29.1 to npm - It seems as though the `0.29.1` release has been made in the repo but not published to `npm`. Could this be addressed? Thanks.
process
publish to npm it seems as though the release has been made in the repo but not published to npm could this be addressed thanks
1
231,552
18,778,081,367
IssuesEvent
2021-11-08 00:17:30
kubernetes/minikube
https://api.github.com/repos/kubernetes/minikube
closed
Frequent test failures of `TestDownloadOnly/v1.22.3-rc.0/preload-exists`
priority/backlog kind/failing-test
This test has high flake rates for the following environments: |Environment|Flake Rate (%)| |---|---| |[Docker_Linux_crio_arm64](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_crio_arm64&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|23.08| |[Docker_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_macOS&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|20.00|
1.0
Frequent test failures of `TestDownloadOnly/v1.22.3-rc.0/preload-exists` - This test has high flake rates for the following environments: |Environment|Flake Rate (%)| |---|---| |[Docker_Linux_crio_arm64](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_crio_arm64&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|23.08| |[Docker_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_macOS&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|20.00|
non_process
frequent test failures of testdownloadonly rc preload exists this test has high flake rates for the following environments environment flake rate
0
70,712
15,099,758,134
IssuesEvent
2021-02-08 03:33:57
fedora-infra/noggin
https://api.github.com/repos/fedora-infra/noggin
closed
[secaudit-blocking] Missing justification for `nosec` lines
security
Part of secaudit #316, blocking. As per the [Fedora Infrastructure Application Security Policy](https://docs.pagure.org/infra-docs/dev-guide/security_policy.html#static-security-checking), any `# nosec` lines must be properly justified. These lines have no documentation why they are ignored from bandit checks: - https://github.com/fedora-infra/noggin/blob/0e3be29de02a1ba7aaf247493c5adf7d08e5f64b/noggin/security/ipa_admin.py#L47 - https://github.com/fedora-infra/noggin/blob/0e3be29de02a1ba7aaf247493c5adf7d08e5f64b/noggin/utility/__init__.py#L17
True
[secaudit-blocking] Missing justification for `nosec` lines - Part of secaudit #316, blocking. As per the [Fedora Infrastructure Application Security Policy](https://docs.pagure.org/infra-docs/dev-guide/security_policy.html#static-security-checking), any `# nosec` lines must be properly justified. These lines have no documentation why they are ignored from bandit checks: - https://github.com/fedora-infra/noggin/blob/0e3be29de02a1ba7aaf247493c5adf7d08e5f64b/noggin/security/ipa_admin.py#L47 - https://github.com/fedora-infra/noggin/blob/0e3be29de02a1ba7aaf247493c5adf7d08e5f64b/noggin/utility/__init__.py#L17
non_process
missing justification for nosec lines part of secaudit blocking as per the any nosec lines must be properly justified these lines have no documentation why they are ignored from bandit checks
0
48,303
7,404,153,732
IssuesEvent
2018-03-20 02:56:42
ufal/neuralmonkey
https://api.github.com/repos/ufal/neuralmonkey
closed
tutorial/docs for continuing a model
documentation
The tutorial or another page in the documentation needs a clear example on how to 'continue a training run', i.e. how to start the training based on some existing variables file.
1.0
tutorial/docs for continuing a model - The tutorial or another page in the documentation needs a clear example on how to 'continue a training run', i.e. how to start the training based on some existing variables file.
non_process
tutorial docs for continuing a model the tutorial or another page in the documentation needs a clear example on how to continue a training run i e how to start the training based on some existing variables file
0
262,506
19,806,754,263
IssuesEvent
2022-01-19 07:50:51
TheThingsIndustries/lorawan-stack-docs
https://api.github.com/repos/TheThingsIndustries/lorawan-stack-docs
opened
Update the command-line interface documentation
documentation
#### Summary The Enterprise CLI (`tti-lw-cli`) has additional commands for tenant management and OpenID Connect, which are not available with `ttn-lw-cli`. Currently, in the [command-line interface documentation](https://www.thethingsindustries.com/docs/reference/cli/), the enterprise CLI (`tti-lw-cli`) commands are also documented to use with the open-source CLI `ttn-lw-cli`. #### Why do we need this ? To distinguish the commands available for the `tti-lw-cli `and `ttn-lw-cli`. #### What is already there? What do you see now? The Things Stack Open source command-line interface **ttn-lw-cli** documentation includes enterprise CLI commands. Ref: https://www.thethingsindustries.com/docs/reference/cli/ #### What is missing? What do you want to see? A distinction should be made between open-source and enterprise CLI commands in the documentation. #### How do you propose to document this? 1. Update the overview of the CLI docs that it is the reference for both open source and enterprise CLI. Ref: https://www.thethingsindustries.com/docs/reference/cli/ 2. Create the command tree for enterprise CLI. #### Can you do this yourself and submit a Pull Request? No
1.0
Update the command-line interface documentation - #### Summary The Enterprise CLI (`tti-lw-cli`) has additional commands for tenant management and OpenID Connect, which are not available with `ttn-lw-cli`. Currently, in the [command-line interface documentation](https://www.thethingsindustries.com/docs/reference/cli/), the enterprise CLI (`tti-lw-cli`) commands are also documented to use with the open-source CLI `ttn-lw-cli`. #### Why do we need this ? To distinguish the commands available for the `tti-lw-cli `and `ttn-lw-cli`. #### What is already there? What do you see now? The Things Stack Open source command-line interface **ttn-lw-cli** documentation includes enterprise CLI commands. Ref: https://www.thethingsindustries.com/docs/reference/cli/ #### What is missing? What do you want to see? A distinction should be made between open-source and enterprise CLI commands in the documentation. #### How do you propose to document this? 1. Update the overview of the CLI docs that it is the reference for both open source and enterprise CLI. Ref: https://www.thethingsindustries.com/docs/reference/cli/ 2. Create the command tree for enterprise CLI. #### Can you do this yourself and submit a Pull Request? No
non_process
update the command line interface documentation summary the enterprise cli tti lw cli has additional commands for tenant management and openid connect which are not available with ttn lw cli currently in the the enterprise cli tti lw cli commands are also documented to use with the open source cli ttn lw cli why do we need this to distinguish the commands available for the tti lw cli and ttn lw cli what is already there what do you see now the things stack open source command line interface ttn lw cli documentation includes enterprise cli commands ref what is missing what do you want to see a distinction should be made between open source and enterprise cli commands in the documentation how do you propose to document this update the overview of the cli docs that it is the reference for both open source and enterprise cli ref create the command tree for enterprise cli can you do this yourself and submit a pull request no
0
6,619
9,702,774,519
IssuesEvent
2019-05-27 09:35:28
plazi/arcadia-project
https://api.github.com/repos/plazi/arcadia-project
opened
website development
Article processing BLR Outreach website
@mguidoti @teodorgeorgiev @millerjemey can we figure out a time Wednesday of Thursday to discuss the usecases we would like to be covered for the BLR website? For me, any time from 1pm to 6pm would be ok each day, with a preference on Wednesday. * The reason we should do this now is that Marcus and Puneet meet in Brazil right now and work on cleaning up data and tags in TB. * We are going to implemenet and hopefully soon move the treatments into Zenodo, including deposit type in Zenodo we have been working. * Jeremy, Torsten, myself and some others came up with a project to be presented at the Entomological Society Meeting in November. It is an ambitious goal to create a cypbercatalogue for a damselfly taxon (Lestoidae: Lestoidea; see abstract below). It has a great science team behind it that will make use of our infrastructure, data, as well as the Smithsonian with a large number of digital images we can use too. This project is ideal to discuss from a science point of view, what data we want to visualized which will help us to design our BLR website. Having said this, this is clearly in line with out arcadia project. * We have another test body of PDFS we can upload to BLR using Lycophronn * We have a body of literature where we will have treatment, treatmentCitations and the cited treatments * we have articles where we can add links to the types, to the figures cited, the specimens in the figure cited. * we have a closed corpus which will help us to decide which element we want to liberate, in what quality * with Odonatologica we have a journal for which we already issue DOIs and thus could just implement a data conversion workflow during the upload * it gives us a chance to document the setting up of such a project, count the hours it takes to build, etc. 
* it gives us a perfect highlevel example to demonstrate what we do * it is a transpacific project with taxa * it is a cosmopolitan project ----------------------- Abstract: > Mobilizing Data from Taxonomic Literature for a lineage of damselflies (Odonata, Lestoidea) > > Jeremy Miller, Hector Ortega Salas, Torsten Dikow, K.D. Dijkstra, Jan van Tol, Guido Sautter, Donat Agosti > > For most species on earth, the primary taxonomic literature contains nearly everything that is known about it. Every described species on earth is the subject of one or more taxonomic treatments. In the past, we have demonstrated the use of semantic enhancement to extract treatments from taxonomic literature and make them available to a network of databases. We use GoldenGATE software to read and interpret taxonomic literature. Individual treatments are parsed out of journal articles and deposited in TreatmentBank. Among the routine parsing and data sharing activities, images cited in treatments are deposited in the Biodiversity Literature Repository (BLR), and occurrence records are shared with the Global Biodiversity Information Facility (GBIF). In the semantically enhanced treatments, treatment citations are linked to the content they cite, and specimen citations are linked to their occurrence on public facing collections databases (where available). We resolved to semantically enhance all taxonomic treatments on Lestoidea, a group of damselflies comprising four families and about 200 species. Our objective was to make these treatments and associated data available for the broad range of stake holders who might have an interest in these animals, including professional biologists, the curious public, and science communications personnel. We generate a taxonomic catalog based on treatment citations, and compare the results to catalogs compiled by taxon specialists. > literature included: [Literature.xlsx](https://github.com/plazi/arcadia-project/files/3223152/Literature.xlsx)
1.0
website development - @mguidoti @teodorgeorgiev @millerjemey can we figure out a time Wednesday of Thursday to discuss the usecases we would like to be covered for the BLR website? For me, any time from 1pm to 6pm would be ok each day, with a preference on Wednesday. * The reason we should do this now is that Marcus and Puneet meet in Brazil right now and work on cleaning up data and tags in TB. * We are going to implemenet and hopefully soon move the treatments into Zenodo, including deposit type in Zenodo we have been working. * Jeremy, Torsten, myself and some others came up with a project to be presented at the Entomological Society Meeting in November. It is an ambitious goal to create a cypbercatalogue for a damselfly taxon (Lestoidae: Lestoidea; see abstract below). It has a great science team behind it that will make use of our infrastructure, data, as well as the Smithsonian with a large number of digital images we can use too. This project is ideal to discuss from a science point of view, what data we want to visualized which will help us to design our BLR website. Having said this, this is clearly in line with out arcadia project. * We have another test body of PDFS we can upload to BLR using Lycophronn * We have a body of literature where we will have treatment, treatmentCitations and the cited treatments * we have articles where we can add links to the types, to the figures cited, the specimens in the figure cited. * we have a closed corpus which will help us to decide which element we want to liberate, in what quality * with Odonatologica we have a journal for which we already issue DOIs and thus could just implement a data conversion workflow during the upload * it gives us a chance to document the setting up of such a project, count the hours it takes to build, etc. 
* it gives us a perfect highlevel example to demonstrate what we do * it is a transpacific project with taxa * it is a cosmopolitan project ----------------------- Abstract: > Mobilizing Data from Taxonomic Literature for a lineage of damselflies (Odonata, Lestoidea) > > Jeremy Miller, Hector Ortega Salas, Torsten Dikow, K.D. Dijkstra, Jan van Tol, Guido Sautter, Donat Agosti > > For most species on earth, the primary taxonomic literature contains nearly everything that is known about it. Every described species on earth is the subject of one or more taxonomic treatments. In the past, we have demonstrated the use of semantic enhancement to extract treatments from taxonomic literature and make them available to a network of databases. We use GoldenGATE software to read and interpret taxonomic literature. Individual treatments are parsed out of journal articles and deposited in TreatmentBank. Among the routine parsing and data sharing activities, images cited in treatments are deposited in the Biodiversity Literature Repository (BLR), and occurrence records are shared with the Global Biodiversity Information Facility (GBIF). In the semantically enhanced treatments, treatment citations are linked to the content they cite, and specimen citations are linked to their occurrence on public facing collections databases (where available). We resolved to semantically enhance all taxonomic treatments on Lestoidea, a group of damselflies comprising four families and about 200 species. Our objective was to make these treatments and associated data available for the broad range of stake holders who might have an interest in these animals, including professional biologists, the curious public, and science communications personnel. We generate a taxonomic catalog based on treatment citations, and compare the results to catalogs compiled by taxon specialists. > literature included: [Literature.xlsx](https://github.com/plazi/arcadia-project/files/3223152/Literature.xlsx)
process
website development mguidoti teodorgeorgiev millerjemey can we figure out a time wednesday of thursday to discuss the usecases we would like to be covered for the blr website for me any time from to would be ok each day with a preference on wednesday the reason we should do this now is that marcus and puneet meet in brazil right now and work on cleaning up data and tags in tb we are going to implemenet and hopefully soon move the treatments into zenodo including deposit type in zenodo we have been working jeremy torsten myself and some others came up with a project to be presented at the entomological society meeting in november it is an ambitious goal to create a cypbercatalogue for a damselfly taxon lestoidae lestoidea see abstract below it has a great science team behind it that will make use of our infrastructure data as well as the smithsonian with a large number of digital images we can use too this project is ideal to discuss from a science point of view what data we want to visualized which will help us to design our blr website having said this this is clearly in line with out arcadia project we have another test body of pdfs we can upload to blr using lycophronn we have a body of literature where we will have treatment treatmentcitations and the cited treatments we have articles where we can add links to the types to the figures cited the specimens in the figure cited we have a closed corpus which will help us to decide which element we want to liberate in what quality with odonatologica we have a journal for which we already issue dois and thus could just implement a data conversion workflow during the upload it gives us a chance to document the setting up of such a project count the hours it takes to build etc it gives us a perfect highlevel example to demonstrate what we do it is a transpacific project with taxa it is a cosmopolitan project abstract mobilizing data from taxonomic literature for a lineage of damselflies odonata lestoidea jeremy miller 
hector ortega salas torsten dikow k d dijkstra jan van tol guido sautter donat agosti for most species on earth the primary taxonomic literature contains nearly everything that is known about it every described species on earth is the subject of one or more taxonomic treatments in the past we have demonstrated the use of semantic enhancement to extract treatments from taxonomic literature and make them available to a network of databases we use goldengate software to read and interpret taxonomic literature individual treatments are parsed out of journal articles and deposited in treatmentbank among the routine parsing and data sharing activities images cited in treatments are deposited in the biodiversity literature repository blr and occurrence records are shared with the global biodiversity information facility gbif in the semantically enhanced treatments treatment citations are linked to the content they cite and specimen citations are linked to their occurrence on public facing collections databases where available we resolved to semantically enhance all taxonomic treatments on lestoidea a group of damselflies comprising four families and about species our objective was to make these treatments and associated data available for the broad range of stake holders who might have an interest in these animals including professional biologists the curious public and science communications personnel we generate a taxonomic catalog based on treatment citations and compare the results to catalogs compiled by taxon specialists literature included
1
61,706
14,634,015,924
IssuesEvent
2020-12-24 03:56:27
LuanP/list-of-common-names
https://api.github.com/repos/LuanP/list-of-common-names
opened
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz
security vulnerability
## CVE-2020-7693 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary> <p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p> <p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p> <p>Path to dependency file: list-of-common-names/package.json</p> <p>Path to vulnerable library: list-of-common-names/node_modules/sockjs/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.3.0.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - :x: **sockjs-0.3.19.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LuanP/list-of-common-names/commit/a56712b2ab7bf3b6b05e91d6650bdd3f2fce540a">a56712b2ab7bf3b6b05e91d6650bdd3f2fce540a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20. 
<p>Publish Date: 2020-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p> <p>Release Date: 2020-07-09</p> <p>Fix Resolution: sockjs - 0.3.20</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - ## CVE-2020-7693 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary> <p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p> <p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p> <p>Path to dependency file: list-of-common-names/package.json</p> <p>Path to vulnerable library: list-of-common-names/node_modules/sockjs/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.3.0.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - :x: **sockjs-0.3.19.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LuanP/list-of-common-names/commit/a56712b2ab7bf3b6b05e91d6650bdd3f2fce540a">a56712b2ab7bf3b6b05e91d6650bdd3f2fce540a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20. 
<p>Publish Date: 2020-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p> <p>Release Date: 2020-07-09</p> <p>Fix Resolution: sockjs - 0.3.20</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in sockjs tgz cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file list of common names package json path to vulnerable library list of common names node modules sockjs package json dependency hierarchy react scripts tgz root library webpack dev server tgz x sockjs tgz vulnerable library found in head commit a href found in base branch master vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sockjs step up your open source security game with whitesource
0
15,370
19,548,039,451
IssuesEvent
2022-01-02 08:13:54
ethereum/EIPs
https://api.github.com/repos/ethereum/EIPs
closed
Extend EIP1 with an option for "EIP editor"
type: Meta type: EIP1 (Process) stale
There are a lot of specifications which could be published as an EIP/ERC, which have been around and (un)maintained in random places. Some of these include: ABI specification, NatSpec, secure private key storage in clients, RPC methods, etc. Since these have been specification efforts by many parties, it may look weird if someone creates an EIP with a single author and collecting all the authors may seem impractical. I propose to introduce an option that instead of a list of authors the EIP/ERC to contain only an `Editor` field listing the person(s) who created the EIP based on an original reference, which should be stated in the `References` section. I hope this would encourage people in the space, who might have not thought about this option given the issue with claiming authorship.
1.0
Extend EIP1 with an option for "EIP editor" - There are a lot of specifications which could be published as an EIP/ERC, which have been around and (un)maintained in random places. Some of these include: ABI specification, NatSpec, secure private key storage in clients, RPC methods, etc. Since these have been specification efforts by many parties, it may look weird if someone creates an EIP with a single author and collecting all the authors may seem impractical. I propose to introduce an option that instead of a list of authors the EIP/ERC to contain only an `Editor` field listing the person(s) who created the EIP based on an original reference, which should be stated in the `References` section. I hope this would encourage people in the space, who might have not thought about this option given the issue with claiming authorship.
process
extend with an option for eip editor there are a lot of specifications which could be published as an eip erc which have been around and un maintained in random places some of these include abi specification natspec secure private key storage in clients rpc methods etc since these have been specification efforts by many parties it may look weird if someone creates an eip with a single author and collecting all the authors may seem impractical i propose to introduce an option that instead of a list of authors the eip erc to contain only an editor field listing the person s who created the eip based on an original reference which should be stated in the references section i hope this would encourage people in the space who might have not thought about this option given the issue with claiming authorship
1
13,983
16,759,521,348
IssuesEvent
2021-06-13 13:53:16
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
"Cube lut out of range" on export leads to random failed exporting and freeze up
no-issue-activity scope: image processing understood: unclear
The issue is described at end of the thread...copied here https://discuss.pixls.us/t/cube-lut-lines-number-not-correct-error/17413/14 On exporting images using .cube lut made with 3d lut creator (50 in this bunch), i get an error. Then only a few of the images actually export, and the border of the DT window starts flashing and I get a could not export file error. A few times DT had to be restarted. After trying a number of times still can’t get them all to export without the process mysteriously stopping. I try again and some of the files export, it doesn’t seem to choke on a particular image. Just seems to randomly stop working.
1.0
"Cube lut out of range" on export leads to random failed exporting and freeze up - The issue is described at end of the thread...copied here https://discuss.pixls.us/t/cube-lut-lines-number-not-correct-error/17413/14 On exporting images using .cube lut made with 3d lut creator (50 in this bunch), i get an error. Then only a few of the images actually export, and the border of the DT window starts flashing and I get a could not export file error. A few times DT had to be restarted. After trying a number of times still can’t get them all to export without the process mysteriously stopping. I try again and some of the files export, it doesn’t seem to choke on a particular image. Just seems to randomly stop working.
process
cube lut out of range on export leads to random failed exporting and freeze up the issue is described at end of the thread copied here on exporting images using cube lut made with lut creator in this bunch i get an error then only a few of the images actually export and the border of the dt window starts flashing and i get a could not export file error a few times dt had to be restarted after trying a number of times still can’t get them all to export without the process mysteriously stopping i try again and some of the files export it doesn’t seem to choke on a particular image just seems to randomly stop working
1
8,036
11,210,864,206
IssuesEvent
2020-01-06 14:14:06
10up/wp-component-library
https://api.github.com/repos/10up/wp-component-library
closed
Add Functional Testing to UI Components
enhancement in process
This should be added after the ES6 conversion is complete.
1.0
Add Functional Testing to UI Components - This should be added after the ES6 conversion is complete.
process
add functional testing to ui components this should be added after the conversion is complete
1
103,316
4,166,943,779
IssuesEvent
2016-06-20 07:21:31
TheNOOFClan/S.C.S.I.
https://api.github.com/repos/TheNOOFClan/S.C.S.I.
closed
Check time without needing a message to be sent
help wanted High Priority Independent TODO
Find a way to Check time without needing a message. Possible Solution: - Store time stamped commands/actions in a JSON file - Create a new thread or process to check each command/action (maybe not every tick) - Run command/action when time stamp is reached - Remove command/action from JSON file
1.0
Check time without needing a message to be sent - Find a way to Check time without needing a message. Possible Solution: - Store time stamped commands/actions in a JSON file - Create a new thread or process to check each command/action (maybe not every tick) - Run command/action when time stamp is reached - Remove command/action from JSON file
non_process
check time without needing a message to be sent find a way to check time without needing a message possible solution store time stamped commands actions in a json file create a new thread or process to check each command action maybe not every tick run command action when time stamp is reached remove command action from json file
0
94,466
3,926,236,265
IssuesEvent
2016-04-22 22:25:53
kdahlquist/GRNmap
https://api.github.com/repos/kdahlquist/GRNmap
closed
Download transcription factor data from SGD and YEASTRACT for constructing networks
data analysis priority 1
Will download transcription factor data from SGD and YEASTRACT so that we have a standard set of data to use when building new networks. We don't want to be confused by data version issues if those databases get updated while we are developing our process for creating networks.
1.0
Download transcription factor data from SGD and YEASTRACT for constructing networks - Will download transcription factor data from SGD and YEASTRACT so that we have a standard set of data to use when building new networks. We don't want to be confused by data version issues if those databases get updated while we are developing our process for creating networks.
non_process
download transcription factor data from sgd and yeastract for constructing networks will download transcription factor data from sgd and yeastract so that we have a standard set of data to use when building new networks we don t want to be confused by data version issues if those databases get updated while we are developing our process for creating networks
0
6,454
9,546,536,567
IssuesEvent
2019-05-01 20:14:21
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Department of State: Add content to Next Steps Page
Apply Process Approved Requirements Ready
Who: Student What: Add content to let the student know we are pulling their information from their USAJOBS profile Why: As a student I want to know where the information that is populated in my application came from. A/C - Change the content under the "Change your profile" to the following: To save time, we'll import the information in your USAJOBS profile into your application. Please review your application before you submit it to make sure all of the information is correct. If you make any edits to your application, the information will not be updated in your USAJOBS profile. This is not in the InVision Mock, it is a content change.
1.0
Department of State: Add content to Next Steps Page - Who: Student What: Add content to let the student know we are pulling their information from their USAJOBS profile Why: As a student I want to know where the information that is populated in my application came from. A/C - Change the content under the "Change your profile" to the following: To save time, we'll import the information in your USAJOBS profile into your application. Please review your application before you submit it to make sure all of the information is correct. If you make any edits to your application, the information will not be updated in your USAJOBS profile. This is not in the InVision Mock, it is a content change.
process
department of state add content to next steps page who student what add content to let the student know we are pulling their information from their usajobs profile why as a student i want to know where the information that is populated in my application came from a c change the content under the change your profile to the following to save time we ll import the information in your usajobs profile into your application please review your application before you submit it to make sure all of the information is correct if you make any edits to your application the information will not be updated in your usajobs profile this is not in the invision mock it is a content change
1
640,525
20,791,793,011
IssuesEvent
2022-03-17 03:23:30
AY2122S2-CS2103T-W13-4/tp
https://api.github.com/repos/AY2122S2-CS2103T-W13-4/tp
closed
As a first time user I can update an existing contact
priority.High type.Story
so that I can accommodate changes in contact details or fix any mistakes that I made when entering the contact details
1.0
As a first time user I can update an existing contact - so that I can accommodate changes in contact details or fix any mistakes that I made when entering the contact details
non_process
as a first time user i can update an existing contact so that i can accommodate changes in contact details or fix any mistakes that i made when entering the contact details
0
66,911
14,813,513,107
IssuesEvent
2021-01-14 02:15:59
MValle21/lamby_site
https://api.github.com/repos/MValle21/lamby_site
opened
CVE-2020-8162 (High) detected in activestorage-5.2.3.gem
security vulnerability
## CVE-2020-8162 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activestorage-5.2.3.gem</b></p></summary> <p>Attach cloud and local files in Rails applications.</p> <p>Library home page: <a href="https://rubygems.org/gems/activestorage-5.2.3.gem">https://rubygems.org/gems/activestorage-5.2.3.gem</a></p> <p> Dependency Hierarchy: - rails-5.2.3.gem (Root Library) - :x: **activestorage-5.2.3.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MValle21/lamby_site/commit/58d2ba7cfe9616216cb126c69803c5ccd10d32b9">58d2ba7cfe9616216cb126c69803c5ccd10d32b9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A client side enforcement of server side security vulnerability exists in rails < 5.2.4.2 and rails < 6.0.3.1 ActiveStorage's S3 adapter that allows the Content-Length of a direct file upload to be modified by an end user bypassing upload limits. <p>Publish Date: 2020-06-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8162>CVE-2020-8162</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-m42x-37p3-fv5w">https://github.com/advisories/GHSA-m42x-37p3-fv5w</a></p> <p>Release Date: 2020-05-31</p> <p>Fix Resolution: 5.2.4.3,6.0.3.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Ruby","packageName":"activestorage","packageVersion":"5.2.3","isTransitiveDependency":true,"dependencyTree":"rails:5.2.3;activestorage:5.2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.4.3,6.0.3.1"}],"vulnerabilityIdentifier":"CVE-2020-8162","vulnerabilityDetails":"A client side enforcement of server side security vulnerability exists in rails \u003c 5.2.4.2 and rails \u003c 6.0.3.1 ActiveStorage\u0027s S3 adapter that allows the Content-Length of a direct file upload to be modified by an end user bypassing upload limits.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8162","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-8162 (High) detected in activestorage-5.2.3.gem - ## CVE-2020-8162 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activestorage-5.2.3.gem</b></p></summary> <p>Attach cloud and local files in Rails applications.</p> <p>Library home page: <a href="https://rubygems.org/gems/activestorage-5.2.3.gem">https://rubygems.org/gems/activestorage-5.2.3.gem</a></p> <p> Dependency Hierarchy: - rails-5.2.3.gem (Root Library) - :x: **activestorage-5.2.3.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MValle21/lamby_site/commit/58d2ba7cfe9616216cb126c69803c5ccd10d32b9">58d2ba7cfe9616216cb126c69803c5ccd10d32b9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A client side enforcement of server side security vulnerability exists in rails < 5.2.4.2 and rails < 6.0.3.1 ActiveStorage's S3 adapter that allows the Content-Length of a direct file upload to be modified by an end user bypassing upload limits. <p>Publish Date: 2020-06-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8162>CVE-2020-8162</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-m42x-37p3-fv5w">https://github.com/advisories/GHSA-m42x-37p3-fv5w</a></p> <p>Release Date: 2020-05-31</p> <p>Fix Resolution: 5.2.4.3,6.0.3.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Ruby","packageName":"activestorage","packageVersion":"5.2.3","isTransitiveDependency":true,"dependencyTree":"rails:5.2.3;activestorage:5.2.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.4.3,6.0.3.1"}],"vulnerabilityIdentifier":"CVE-2020-8162","vulnerabilityDetails":"A client side enforcement of server side security vulnerability exists in rails \u003c 5.2.4.2 and rails \u003c 6.0.3.1 ActiveStorage\u0027s S3 adapter that allows the Content-Length of a direct file upload to be modified by an end user bypassing upload limits.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8162","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in activestorage gem cve high severity vulnerability vulnerable library activestorage gem attach cloud and local files in rails applications library home page a href dependency hierarchy rails gem root library x activestorage gem vulnerable library found in head commit a href found in base branch master vulnerability details a client side enforcement of server side security vulnerability exists in rails and rails activestorage s adapter that allows the content length of a direct file upload to be modified by an end user bypassing upload limits publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a client side enforcement of server side security vulnerability exists in rails and rails activestorage adapter that allows the content length of a direct file upload to be modified by an end user bypassing upload limits vulnerabilityurl
0
18,257
24,339,110,043
IssuesEvent
2022-10-01 13:16:13
fertadeo/ISPC-2do-Cuat-Proyecto
https://api.github.com/repos/fertadeo/ISPC-2do-Cuat-Proyecto
reopened
#TK 3.0 Creation of a literary genres form
in process
Generate dropdown options in an HTML "SELECT" for the literary genres (drama, fiction, romance, etc.)
1.0
#TK 3.0 Creation of a literary genres form - Generate dropdown options in an HTML "SELECT" for the literary genres (drama, fiction, romance, etc.)
process
tk creation of a literary genres form generate dropdown options in an html select for the literary genres drama fiction romance etc
1
449,872
12,976,255,253
IssuesEvent
2020-07-21 18:26:45
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
opened
Add ability to suppress status check output when configured
area/logging area/ux july-chill kind/feature-request priority/p0
depends on #4512 wire up flag/config option to suppress status check output in favor of status.
1.0
Add ability to suppress status check output when configured - depends on #4512 wire up flag/config option to suppress status check output in favor of status.
non_process
add ability to suppress status check output when configured depends on wire up flag config option to suppress status check output in favor of status
0
18,720
24,610,987,525
IssuesEvent
2022-10-14 21:26:07
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Release checklist 0.65
enhancement process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.65.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [x] Deploy to VM ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests - [x] Migrations tested against mainnet clone ## Previewnet - [x] Deploy to VM ## Staging - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA ## Testnet - [x] Deploy to VM ## Mainnet - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA - [x] Deploy to VM - [x] Deploy to ETL ### Alternatives _No response_
1.0
Release checklist 0.65 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.65.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [x] Deploy to VM ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests - [x] Migrations tested against mainnet clone ## Previewnet - [x] Deploy to VM ## Staging - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA ## Testnet - [x] Deploy to VM ## Mainnet - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA - [x] Deploy to VM - [x] Deploy to ETL ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
1
17,960
23,966,996,090
IssuesEvent
2022-09-13 02:39:46
openxla/stablehlo
https://api.github.com/repos/openxla/stablehlo
closed
Add python bindings test to GitHub Actions
Process
### Request description Per feedback in #49 - Add tests for python API to CI workflows. https://github.com/openxla/stablehlo/tree/main/build_tools#python-api ### Additional context _No response_
1.0
Add python bindings test to GitHub Actions - ### Request description Per feedback in #49 - Add tests for python API to CI workflows. https://github.com/openxla/stablehlo/tree/main/build_tools#python-api ### Additional context _No response_
process
add python bindings test to github actions request description per feedback in add tests for python api to ci workflows additional context no response
1
16,294
20,923,992,407
IssuesEvent
2022-03-24 20:24:24
biocodellc/localcontexts_db
https://api.github.com/repos/biocodellc/localcontexts_db
opened
Add preparation step screen to the registration process when it is approved
medium priority registration process Figma
This will be a screen that will prepare the user to go through community or institution account creation process. It is created in Figma, just needs to be approved by Jane or Maui before it is added to the hub.
1.0
Add preparation step screen to the registration process when it is approved - This will be a screen that will prepare the user to go through community or institution account creation process. It is created in Figma, just needs to be approved by Jane or Maui before it is added to the hub.
process
add preparation step screen to the registration process when it is approved this will be a screen that will prepare the user to go through community or institution account creation process it is created in figma just needs to be approved by jane or maui before it is added to the hub
1
20,288
26,921,767,489
IssuesEvent
2023-02-07 10:58:00
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
closed
[C++] Nightly Integration Testing Report
type: process nightly-testing
Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)** *** <hidden value="integration-test-status-comment"></hidden> ### ❌&nbsp; [build against repo] Integration test FAILED Requested by @DellaBitta on commit 5c0eebe6cdffa6007bd82cfac606c515ef9abb94 Last updated: Mon Feb 6 03:23 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4101963752)** | Failures | Configs | |----------|---------| | missing_log | [BUILD] [ERROR] [Linux] [1/2 ssl_lib: x86] [All 2 build_type]<br/> | | storage | [TEST] [FLAKINESS] [Android] [2/3 os: ubuntu windows] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;CRASH/TIMEOUT</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 5c0eebe6cdffa6007bd82cfac606c515ef9abb94 Last updated: Sun Feb 5 05:19 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4096493964)** <hidden value="integration-test-status-comment"></hidden>
1.0
[C++] Nightly Integration Testing Report - Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)** *** <hidden value="integration-test-status-comment"></hidden> ### ❌&nbsp; [build against repo] Integration test FAILED Requested by @DellaBitta on commit 5c0eebe6cdffa6007bd82cfac606c515ef9abb94 Last updated: Mon Feb 6 03:23 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4101963752)** | Failures | Configs | |----------|---------| | missing_log | [BUILD] [ERROR] [Linux] [1/2 ssl_lib: x86] [All 2 build_type]<br/> | | storage | [TEST] [FLAKINESS] [Android] [2/3 os: ubuntu windows] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;CRASH/TIMEOUT</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 5c0eebe6cdffa6007bd82cfac606c515ef9abb94 Last updated: Sun Feb 5 05:19 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4096493964)** <hidden value="integration-test-status-comment"></hidden>
process
nightly integration testing report note this report excludes firestore please also check ❌ nbsp integration test failed requested by dellabitta on commit last updated mon feb pst failures configs missing log storage failed tests nbsp nbsp crash timeout add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sun feb pst
1
272,061
23,651,233,371
IssuesEvent
2022-08-26 06:52:25
kubernetes-sigs/cluster-api-provider-aws
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
closed
Divide presubmit e2e tests to shorten test duration
priority/backlog lifecycle/rotten area/testing triage/accepted
Our e2e test suite is growing, and as a consequence takes more time (~1.5 hours) to finish. Not all the tests are giving valuable signal to run as a presubmit job, running them in periodic jobs are enough. We could divide existing tests as pr-e2e-full and pr-e2e-essential (or better naming) to run on PRs. For instance, BYO infra tests and CSI migration tests are not being affected by most changes, hence can go to only pr-e2e-full. Or clusterctl upgrade tests are only important when there is an API change in a PR. Also, proposing to enable blocking e2e test to run for all PRs.
1.0
Divide presubmit e2e tests to shorten test duration - Our e2e test suite is growing, and as a consequence takes more time (~1.5 hours) to finish. Not all the tests are giving valuable signal to run as a presubmit job, running them in periodic jobs are enough. We could divide existing tests as pr-e2e-full and pr-e2e-essential (or better naming) to run on PRs. For instance, BYO infra tests and CSI migration tests are not being affected by most changes, hence can go to only pr-e2e-full. Or clusterctl upgrade tests are only important when there is an API change in a PR. Also, proposing to enable blocking e2e test to run for all PRs.
non_process
divide presubmit tests to shorten test duration our test suite is growing and as a consequence takes more time hours to finish not all the tests are giving valuable signal to run as a presubmit job running them in periodic jobs are enough we could divide existing tests as pr full and pr essential or better naming to run on prs for instance byo infra tests and csi migration tests are not being affected by most changes hence can go to only pr full or clusterctl upgrade tests are only important when there is an api change in a pr also proposing to enable blocking test to run for all prs
0
14,334
17,364,973,654
IssuesEvent
2021-07-30 05:33:36
googleapis/google-cloud-ruby
https://api.github.com/repos/googleapis/google-cloud-ruby
closed
Failure in nightly rubocop test
type: process
https://github.com/googleapis/google-cloud-ruby/runs/3006986561 One fairly straightforward fix in the security_center samples acceptance test.
1.0
Failure in nightly rubocop test - https://github.com/googleapis/google-cloud-ruby/runs/3006986561 One fairly straightforward fix in the security_center samples acceptance test.
process
failure in nightly rubocop test one fairly straightforward fix in the security center samples acceptance test
1
6,163
9,048,488,751
IssuesEvent
2019-02-12 00:17:16
googleapis/nodejs-error-reporting
https://api.github.com/repos/googleapis/nodejs-error-reporting
closed
Determine if escape-regexp-component@1.0.2 can be used
type: process
The `escape-regexp-component@1.0.2` package does not have a license specified in `package.json` or a LICENSE file. However, its `Readme.md` file contains a License section that just says `MIT`. Determine if this is enough to know that the library is under the MIT license.
1.0
Determine if escape-regexp-component@1.0.2 can be used - The `escape-regexp-component@1.0.2` package does not have a license specified in `package.json` or a LICENSE file. However, its `Readme.md` file contains a License section that just says `MIT`. Determine if this is enough to know that the library is under the MIT license.
process
determine if escape regexp component can be used the escape regexp component package does not have a license specified in package json or a license file however its readme md file contains a license section that just says mit determine if this is enough to know that the library is under the mit license
1
19,410
25,556,308,142
IssuesEvent
2022-11-30 07:02:35
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Wed, 30 Nov 22
event camera white balance isp compression image signal processing image signal process raw raw image
## Keyword: event camera There is no result ## Keyword: white balance There is no result ## Keyword: isp ### FETI-DP preconditioners for 2D Biot model with discontinuous Galerkin discretization - **Authors:** Pilhwa Lee - **Subjects:** Numerical Analysis (math.NA); Analysis of PDEs (math.AP) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15670 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15670 - **Abstract** Dual-primal FETI (FETI-DP) preconditioners are developed for a 2D Biot model. The model is formulated with mixed-finite elements as a saddle-point problem. The displacement $\mathbf{u}$ and the Darcy flux flow $\mathbf{z}$ are represented with $P_1$ piecewise continuous elements and pore-pressure $p$ with $P_0$ piecewise constant elements, {\it i.e.}, overall three fields with a stabilizing term. We have tested the functionality of FETI-DP with and without Dirichlet preconditioners. Numerical experiments show a signature of scalability of the resulting parallel algorithm in the compressible elasticity with permeable Darcy flow as well as almost incompressible elasticity. ### Predicting Football Match Outcomes with eXplainable Machine Learning and the Kelly Index - **Authors:** Yiming Ren, Teo Susnjak - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15734 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15734 - **Abstract** In this work, a machine learning approach is developed for predicting the outcomes of football matches. The novelty of this research lies in the utilisation of the Kelly Index to first classify matches into categories where each one denotes the different levels of predictive difficulty. Classification models using a wide suite of algorithms were developed for each category of matches in order to determine the efficacy of the approach. In conjunction to this, a set of previously unexplored features were engineering including Elo-based variables. 
The dataset originated from the Premier League match data covering the 2019-2021 seasons. The findings indicate that the process of decomposing the predictive problem into sub-tasks was effective and produced competitive results with prior works, while the ensemble-based methods were the most effective. The paper also devised an investment strategy in order to evaluate its effectiveness by benchmarking against bookmaker odds. An approach was developed that minimises risk by combining the Kelly Index with the predefined confidence thresholds of the predictive models. The experiments found that the proposed strategy can return a profit when following a conservative approach that focuses primarily on easy-to-predict matches where the predictive models display a high confidence level. ### Understanding the Impact of Adversarial Robustness on Accuracy Disparity - **Authors:** Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao - **Subjects:** Machine Learning (cs.LG); Machine Learning (stat.ML) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15762 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15762 - **Abstract** While it has long been empirically observed that adversarial robustness may be at odds with standard accuracy and may have further disparate impacts on different classes, it remains an open question to what extent such observations hold and how the class imbalance plays a role within. In this paper, we attempt to understand this question of accuracy disparity by taking a closer look at linear classifiers under a Gaussian mixture model. We decompose the impact of adversarial robustness into two parts: an inherent effect that will degrade the standard accuracy on all classes, and the other caused by the class imbalance ratio, which will increase the accuracy disparity compared to standard training. Furthermore, we also extend our model to the general family of stable distributions. 
We demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution. We additionally perform experiments on both synthetic and real-world datasets. The empirical results not only corroborate our theoretical findings, but also suggest that the implications may extend to nonlinear models over real-world datasets. ### Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework - **Authors:** Amin Shojaeighadikolaei, Arman Ghasemi, Kailani Jones, Yousif Dafalla, Alexandru G. Bardas, Reza Ahmadi, Morteza Haashemi - **Subjects:** Multiagent Systems (cs.MA); Machine Learning (cs.LG); Systems and Control (eess.SY) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15858 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15858 - **Abstract** This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end-users energy bills. However, the conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on real-time pricing strategy that is achieved using deep reinforcement learning. 
Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements of the 24-hour accumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators.

### Advisory Tool for Managing Failure Cascades in Systems with Wind Power
- **Authors:** Siyu Liu, Marija Ilic
- **Subjects:** Systems and Control (eess.SY)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15957
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15957
- **Abstract** This paper concerns the resilience of systems with wind power upon wind reduction by evaluating the potential of corrective actions, such as generation and load dispatch, on minimizing the effects of transmission line failures. Three functions (grid, consumer-centric loss, and resilience impact) are used to statistically evaluate the criticality of initial contingent failures and wind reductions. Our model is learned with Monte Carlo, convex optimization, and adaptive selection, illustrated on the IEEE-30 and IEEE-300 bus systems with both AC and DC models. We highlight the impact of wind reductions and propose physically implementable solutions.

### Data Privacy Protection in DeFi Protocols
- **Authors:** Jiawei Zhu, Zhuangtong Huang, Yixin Xu, Jerome Yen, Ye Wang
- **Subjects:** Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16082
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16082
- **Abstract** With the development of decentralized finance (DeFi), the inherent limitations caused by the blockchain system have come to the surface.
Because recorded data on the blockchain is available to system participants, DeFi protocols may not collect the private data of users. Otherwise, the information leakage may result in serious financial losses or cause legal issues. Therefore, DeFi protocols can hardly offer different users customized solutions, and capital utilization is limited. To address this challenge in DeFi, we propose a solution: a trustful protocol that allows users to provide personal private data to DeFi protocols without worrying that such information would be disclosed. By implementing asymmetric encryption, zero-knowledge proof, and homomorphic encryption, we ensure that users' data will not be controlled by any centralized authorities and avoid potential financial losses or legal disputes due to information leakage. We further discuss the application scenarios of financial data privacy protection in public blockchain DeFi ecosystems and cross-border financial applications, such as credit aggregation.

### Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape
- **Authors:** Linkun Fan, Fazhi He, Qing Guo, Wei Tang, Xiaolin Hong, Bing Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16192
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16192
- **Abstract** To save cost, many deep neural networks (DNNs) are trained on third-party datasets downloaded from the internet, which enables an attacker to implant a backdoor into DNNs. In the 2D domain, the inherent structures of different image formats are similar, so a backdoor attack designed for one image format will suit others. However, in the 3D world there is a huge disparity among different 3D data structures. As a result, a backdoor pattern designed for one particular 3D data structure will be disabled for other data structures of the same 3D scene.
Therefore, this paper designs a uniform backdoor pattern, NRBdoor (Noisy Rotation Backdoor), which is able to adapt to heterogeneous 3D data structures. Specifically, we start from the unit rotation and then search for the optimal pattern through a noise generation and selection process. The proposed NRBdoor is natural and imperceptible, since rotation is a common operation that usually contains noise, due to both the mismatch between a pair of points and sensor calibration error in real-world 3D scenes. Extensive experiments on 3D mesh and point cloud show that the proposed NRBdoor achieves state-of-the-art performance, with negligible shape variation.

### Controllable speech synthesis by learning discrete phoneme-level prosodic representations
- **Authors:** Nikolaos Ellinas, Myrsini Christidou, Alexandra Vioni, June Sig Sung, Aimilios Chalamandaris, Pirros Tsiakoulis, Paris Mastorocostas
- **Subjects:** Sound (cs.SD); Computation and Language (cs.CL); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16307
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16307
- **Abstract** In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration, and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity.
Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario, and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.

### Fourier-Net: Fast Image Registration with Band-limited Deformation
- **Authors:** Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang, Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, Jinming Duan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16342
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16342
- **Abstract** Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource intensive and time-consuming. To tackle this problem, we propose the Fourier-Net, replacing the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, our Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) to the dense, full-resolution displacement field in the spatial domain.
These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22% of its parameters and 6.66% of the mult-adds, achieves a 0.6% higher Dice score and an 11.48$\times$ faster inference speed. Code is available at https://github.com/xi-jia/Fourier-Net.

### Is Twitter Enough? Investigating Situational Awareness in Social and Print Media during the Second COVID-19 Wave in India
- **Authors:** Ishita Vohra, Meher Shashwat Nigam, Aryan Sakaria, Amey Kudari, Nimmi Rangaswamy
- **Subjects:** Social and Information Networks (cs.SI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16360
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16360
- **Abstract** The pandemic required efficient allocation of public resources and the transformation of existing societal functions. To manage any crisis, governments and public health researchers exploit the information available to them in order to make informed decisions, a capability also defined as situational awareness. Gathering situational awareness using social media has been functional in managing epidemics. Previous research focused on using discussions during periods of epidemic crises on social media platforms like Twitter, Reddit, or Facebook and developing NLP techniques to filter out relevant discussions from a huge corpus of messages and posts. Social media usage varies with internet penetration and other socioeconomic factors, which might induce disparity in analyzing discussions across different geographies. However, print media is a ubiquitous information source, irrespective of geography.
Further, topics discussed in news articles are already newsworthy, while on social media newsworthiness is a product of techno-social processes. Building on this fundamental difference, we study Twitter data during the second wave in India, focused on six high-population cities with varied macroeconomic factors. Through a mixture of qualitative and quantitative methods, we further analyze two Indian newspapers during the same period and compare topics from both Twitter and the newspapers to evaluate situational awareness around the second phase of COVID on each of these platforms. We conclude that factors like internet penetration and GDP in a specific city influence the discourse surrounding situational updates on social media. Thus, augmenting information from newspapers with information extracted from social media would provide a more comprehensive perspective in resource-deficit cities.

## Keyword: compression

### Post-training Quantization on Diffusion Models
- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736
- **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories.
However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time-step, making previous PTQ methods fail in DMs, since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, the calibration dataset, and the calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.

### Compressing Cross-Lingual Multi-Task Models at Qualtrics
- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927
- **Abstract** Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains.
In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to a level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or a 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.

### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm
- **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye
- **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933
- **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably by loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication.
Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract** Groups of unknown order are of major interest due to their applications, including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order $N$. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits.
Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention
- **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368
- **Abstract** Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear complexity. Among them, the low-rank-based methods aim to learn projection matrices to compress the sequence length. However, the projection matrices are fixed once they have been learned, compressing the sequence length with dedicated coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of the hidden state dimension.
To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while maintaining less memory consumption with higher speed.

### Compressing Volumetric Radiance Fields to 1 MB
- **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386
- **Abstract** Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene.
We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss of visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and generalizes well across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications. Code is available at https://github.com/AlgoHunt/VQRF.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: raw

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios.
Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach for learning visual planning models in environments with incomplete observations.

### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics
- **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671
- **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotation data. However, obtaining these annotated data incurs expensive costs in reality. Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and two-stage learning methods that decouple feature extraction and classification have been proven effective.
Nevertheless, representation learning limited only to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning framework with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of positive and negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features, and relax mutual information to a contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets.

### Superpoint Transformer for 3D Scene Instance Segmentation
- **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766
- **Abstract** Most existing methods realize 3D instance segmentation by extending models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework; 2) existing methods require a time-consuming intermediate aggregation step. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on a Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation.
The key step in this framework is a novel query decoder with transformers that can capture instance information through a superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can be trained without the intermediate aggregation step, which accelerates the network. Extensive experiments on the ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds compared state-of-the-art methods by 4.3% on the ScanNetv2 hidden test set in terms of mAP while simultaneously keeping a fast inference speed (247ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer.

### ClueWeb22: 10 Billion Web Documents with Rich Information
- **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
- **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848
- **Abstract** ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry.
Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.

### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss
- **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar
- **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047
- **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts, and a low-level neural layer for extracting the symbols required to generate the symbolic explanation. Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive, and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature-values from raw data, using 'abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions.
We also show that prior information about the features, in the form of even imperfect pre-training, can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by 'feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback.

### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning
- **Authors:** Guoxi Zhang, Hisashi Kashima
- **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078
- **Abstract** Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, by neglecting data heterogeneity, existing approaches for behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as its behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization of multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage by extending an existing offline RL algorithm. Lastly, with extensive evaluation this work confirms the existence of behavior misspecification and the efficacy of the proposed model.
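The latent-variable idea in the abstract above — infer which of several unknown behavior policies generated each trajectory — can be sketched with a small EM loop over tabular policies. This is an illustrative toy, not the paper's method: the discrete state/action setting, the tabular parameterisation, and the use of EM for inference are all assumptions made here for brevity.

```python
import math
import random

def em_behavior_mixture(trajectories, n_states, n_actions, K=2, iters=50, seed=0):
    """Infer K tabular behavior policies from (state, action) trajectories via EM.

    Toy illustration of the latent-variable idea: each trajectory is assumed
    to come from one of K unknown policies. Returns (mixture weights,
    policy tables pi[k][s][a], per-trajectory responsibilities).
    """
    rng = random.Random(seed)
    # Random initialisation of the policy tables, normalised per state.
    pi = [[[rng.random() + 0.1 for _ in range(n_actions)] for _ in range(n_states)]
          for _ in range(K)]
    for k in range(K):
        for s in range(n_states):
            z = sum(pi[k][s])
            pi[k][s] = [p / z for p in pi[k][s]]
    w = [1.0 / K] * K
    resp = []
    for _ in range(iters):
        # E-step: posterior over which policy generated each trajectory (log-space).
        resp = []
        for traj in trajectories:
            logp = [math.log(w[k]) + sum(math.log(pi[k][s][a]) for s, a in traj)
                    for k in range(K)]
            m = max(logp)
            unnorm = [math.exp(lp - m) for lp in logp]
            z = sum(unnorm)
            resp.append([u / z for u in unnorm])
        # M-step: re-fit mixture weights and per-policy action tables.
        w = [max(sum(r[k] for r in resp) / len(trajectories), 1e-12) for k in range(K)]
        for k in range(K):
            counts = [[1e-6] * n_actions for _ in range(n_states)]  # light smoothing
            for r, traj in zip(resp, trajectories):
                for s, a in traj:
                    counts[s][a] += r[k]
            for s in range(n_states):
                z = sum(counts[s])
                pi[k][s] = [c / z for c in counts[s]]
    return w, pi, resp
```

Each trajectory's responsibility vector then plays the role described in the abstract: it selects the inferred policy that best describes that trajectory, rather than forcing a single behavior policy onto heterogeneous data.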
### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases

- **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko
- **Subjects:** Digital Libraries (cs.DL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124
- **Abstract**
This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of the Ukrainian Economics discipline with the results of gender and author-ordering analysis at the level of individual authors; special methods for working with bibliographic data dominated by non-English author names are used. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, and the arrangements of names are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages for working with such specific data make it possible to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo publications and the gender-mixing patterns are found to depend on the journal: differences are found for publications indexed in Scopus and/or Web of Science. It has also been found that Ukrainian Economics research is characterized by a rather non-alphabetical ordering of authors. To our knowledge, this is the first large-scale quantitative study of the Ukrainian Economics discipline.
The results obtained are valuable not only at the national level but also contribute to general knowledge about Economics research, gender issues, and the ordering of authors' names. Here, for the first time, attention is drawn to the explicit use of the features of Slavic authors' names.

### Trustless unknown-order groups

- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract**
Groups of unknown order are of major interest due to their applications, including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order $N$. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits.
We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices

- **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135
- **Abstract**
The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services, for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with the small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system for mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to platform-imposed dynamic energy budgets.
We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions.

### Few-shot Query-Focused Summarization with Prefix-Merging

- **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164
- **Abstract**
Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Unlike text summarization, query-focused summarization has long been plagued by the lack of high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation of how prefix-merging works.
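The digest does not show how prefixes are merged; as a rough sketch of the prefix-tuning mechanism the method builds on, the toy below prepends (hypothetically merged) prefix key/value rows to the keys and values of a frozen attention layer. All shapes, names, and the concatenation-style "merge" are assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Single-head attention where trainable prefix key/value rows are
    prepended to the (frozen) keys and values -- the core prefix-tuning idea."""
    k_full = np.concatenate([prefix_k, k], axis=0)   # (P + T, d)
    v_full = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k_full.T / np.sqrt(q.shape[-1])     # (T, P + T)
    return softmax(scores) @ v_full                  # (T, d)

# Hypothetical "merge": combine prefixes learned on two source tasks
# (summarization, QA) into one prefix applied to the target task.
rng = np.random.default_rng(0)
d, T, P = 8, 5, 3
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
prefix_sum = rng.normal(size=(P, d)), rng.normal(size=(P, d))  # task 1 prefix (k, v)
prefix_qa = rng.normal(size=(P, d)), rng.normal(size=(P, d))   # task 2 prefix (k, v)
merged_k = np.concatenate([prefix_sum[0], prefix_qa[0]], axis=0)
merged_v = np.concatenate([prefix_sum[1], prefix_qa[1]], axis=0)
out = attention_with_prefix(q, k, v, merged_k, merged_v)
print(out.shape)  # (5, 8)
```

Only the prefix rows would be trained in this scheme; the base attention weights stay frozen, which is what keeps the trainable-parameter count small.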
### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model

- **Authors:** Gwanghyun Kim, Se Young Chun
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374
- **Abstract**
Recent 3D generative models have achieved remarkable performance in synthesizing high-resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging, since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance at converting a 2D generative model trained on one domain into models for other domains with different styles by leveraging CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback is that the sample diversity of the original generative model is not well preserved in the domain-adapted generative models, due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation is even more challenging for 3D generative models, not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality. Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain.
Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline is able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high-resolution, multi-view-consistent images in text-guided target domains without additional data, outperforming existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations, such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction, to fully enjoy the diversity in text.

### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations

- **Authors:** Marissa D'Alonzo, Rebecca Russell
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381
- **Abstract**
Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method for automatically detecting RL symmetries directly from raw trajectory data, without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy.
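The detection idea can be illustrated with a hedged stand-in for the paper's RNN discriminator: train any classifier to tell original trajectories from transformed ones, and read held-out accuracy as a symmetry score. Below, a logistic-regression discriminator on summary features of a toy system (a zero-mean random walk) stays near chance for a true symmetry (negation) and easily detects a non-symmetry (added drift). The toy system, features, and training details are all assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def trajectories(n, T=50):
    # Toy system: 1D random walk with zero-mean Gaussian steps,
    # whose distribution is symmetric under negation x -> -x.
    return np.cumsum(rng.normal(size=(n, T)), axis=1)

def features(X):
    # Per-trajectory summary statistics fed to the discriminator.
    return np.stack([X.mean(axis=1), X[:, -1], X.std(axis=1)], axis=1)

def discriminator_accuracy(transform, n=2000, steps=3000, lr=0.1):
    """Train a logistic-regression discriminator to tell original from
    transformed trajectories; held-out accuracy near 0.5 means the
    candidate transformation is (approximately) a symmetry."""
    X = trajectories(n)
    F = np.vstack([features(X[: n // 2]), features(transform(X[n // 2 :]))])
    y = np.r_[np.zeros(n // 2), np.ones(n // 2)]
    F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)  # standardize features
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2 :]
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):  # plain full-batch gradient descent
        z = np.clip(F[tr] @ w + b, -30, 30)
        g = 1 / (1 + np.exp(-z)) - y[tr]
        w -= lr * F[tr].T @ g / len(tr)
        b -= lr * g.mean()
    pred = (F[te] @ w + b) > 0
    return float((pred == y[te]).mean())

acc_sym = discriminator_accuracy(lambda X: -X)                         # negation: a symmetry
acc_asym = discriminator_accuracy(lambda X: X + np.arange(X.shape[1]))  # drift: not a symmetry
print(acc_sym, acc_asym)
```

With this setup the symmetric candidate scores near 0.5 while the drifted one is detected almost perfectly, which is exactly the signal the paper reads off its RNN discriminator.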
### Abstract Visual Reasoning with Tangram Shapes

- **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492
- **Abstract**
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole-shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .

## Keyword: raw image

### Learning Visual Planning Models from Partially Observed Images

- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract**
There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations.
It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach for learning visual planning models in environments with incomplete observations.
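Recplan's components are not detailed in this digest, but its final step -- running a classical planner over a learned transition model guided by a learned distance-to-goal heuristic -- can be sketched generically. Below, a grid transition function and Manhattan distance stand in for the learned models; everything here is an illustrative assumption:

```python
import heapq

def plan(start, goal, transition, heuristic):
    """A* search: f = cost so far + estimated distance-to-goal.
    In a Recplan-style system, `transition` and `heuristic` would be
    the learned latent transition model and the learned heuristic model."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in transition(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(
                    frontier,
                    (g + 1 + heuristic(nxt, goal), g + 1, nxt, path + [nxt]),
                )
    return None

# Toy stand-ins: moves on a 5x5 grid, Manhattan distance as the heuristic.
def grid_transition(s):
    x, y = s
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda s, g: abs(s[0] - g[0]) + abs(s[1] - g[1])

path = plan((0, 0), (4, 3), grid_transition, manhattan)
print(len(path) - 1)  # 7: A* with an admissible heuristic returns a shortest path
```

The same loop works unchanged if `transition` enumerates successors predicted by a learned model and `heuristic` is a neural distance estimate, though an inaccurate learned heuristic would no longer guarantee optimal plans.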
New submissions for Wed, 30 Nov 22

## Keyword: event camera

There is no result

## Keyword: white balance

There is no result

## Keyword: isp

### FETI-DP preconditioners for 2D Biot model with discontinuous Galerkin discretization

- **Authors:** Pilhwa Lee
- **Subjects:** Numerical Analysis (math.NA); Analysis of PDEs (math.AP)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15670
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15670
- **Abstract**
Dual-primal FETI (FETI-DP) preconditioners are developed for a 2D Biot model. The model is formulated with mixed finite elements as a saddle-point problem. The displacement $\mathbf{u}$ and the Darcy flux $\mathbf{z}$ are represented with $P_1$ piecewise continuous elements and the pore pressure $p$ with $P_0$ piecewise constant elements, i.e., overall three fields with a stabilizing term. We have tested the functionality of FETI-DP with and without Dirichlet preconditioners. Numerical experiments show a signature of scalability of the resulting parallel algorithm in compressible elasticity with permeable Darcy flow, as well as in almost incompressible elasticity.

### Predicting Football Match Outcomes with eXplainable Machine Learning and the Kelly Index

- **Authors:** Yiming Ren, Teo Susnjak
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15734
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15734
- **Abstract**
In this work, a machine learning approach is developed for predicting the outcomes of football matches. The novelty of this research lies in the utilisation of the Kelly Index to first classify matches into categories, where each one denotes a different level of predictive difficulty. Classification models using a wide suite of algorithms were developed for each category of matches in order to determine the efficacy of the approach. In conjunction with this, a set of previously unexplored features was engineered, including Elo-based variables.
The dataset originated from Premier League match data covering the 2019-2021 seasons. The findings indicate that the process of decomposing the predictive problem into sub-tasks was effective and produced results competitive with prior works, with the ensemble-based methods being the most effective. The paper also devised an investment strategy and evaluated its effectiveness by benchmarking against bookmaker odds. An approach was developed that minimises risk by combining the Kelly Index with the predefined confidence thresholds of the predictive models. The experiments found that the proposed strategy can return a profit when following a conservative approach that focuses primarily on easy-to-predict matches where the predictive models display a high confidence level.

### Understanding the Impact of Adversarial Robustness on Accuracy Disparity

- **Authors:** Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
- **Subjects:** Machine Learning (cs.LG); Machine Learning (stat.ML)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15762
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15762
- **Abstract**
While it has long been empirically observed that adversarial robustness may be at odds with standard accuracy and may have further disparate impacts on different classes, it remains an open question to what extent such observations hold and how the class imbalance plays a role within. In this paper, we attempt to understand this question of accuracy disparity by taking a closer look at linear classifiers under a Gaussian mixture model. We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades the standard accuracy on all classes, and another caused by the class imbalance ratio, which increases the accuracy disparity compared to standard training. Furthermore, we also extend our model to the general family of stable distributions.
We demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution. We additionally perform experiments on both synthetic and real-world datasets. The empirical results not only corroborate our theoretical findings, but also suggest that the implications may extend to nonlinear models over real-world datasets.

### Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework

- **Authors:** Amin Shojaeighadikolaei, Arman Ghasemi, Kailani Jones, Yousif Dafalla, Alexandru G. Bardas, Reza Ahmadi, Morteza Haashemi
- **Subjects:** Multiagent Systems (cs.MA); Machine Learning (cs.LG); Systems and Control (eess.SY)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15858
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15858
- **Abstract**
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end-users' energy bills. However, conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy that is achieved using deep reinforcement learning.
Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements in the 24-hour accumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators.

### Advisory Tool for Managing Failure Cascades in Systems with Wind Power

- **Authors:** Siyu Liu, Marija Ilic
- **Subjects:** Systems and Control (eess.SY)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15957
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15957
- **Abstract**
This paper concerns the resilience of systems with wind power upon wind reduction, evaluating the potential of corrective actions, such as generation and load dispatch, in minimizing the effects of transmission line failures. Three functions (grid, consumer-centric loss, and resilience impact) are used to statistically evaluate the criticality of initial contingent failures and wind reductions. Our model is learned with Monte Carlo methods, convex optimization, and adaptive selection, and is illustrated on the IEEE-30 and IEEE-300 bus systems with both AC and DC models. We highlight the impact of wind reductions and propose physically implementable solutions.

### Data Privacy Protection in DeFi Protocols

- **Authors:** Jiawei Zhu, Zhuangtong Huang, Yixin Xu, Jerome Yen, Ye Wang
- **Subjects:** Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16082
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16082
- **Abstract**
With the development of decentralized finance (DeFi), the inherent limitations caused by the blockchain system have come to the surface.
Because data recorded on the blockchain is available to all system participants, DeFi protocols cannot collect the private data of users; otherwise, the information leakage may result in serious financial losses or cause legal issues. Therefore, DeFi protocols can hardly offer different users customized solutions, and capital utilization is limited. To address this challenge in DeFi, we propose a solution: a trustful protocol that allows users to provide personal private data to DeFi protocols without worrying that such information would be disclosed. By implementing asymmetric encryption, zero-knowledge proofs, and homomorphic encryption, we ensure that users' data will not be controlled by any centralized authority and avoid potential financial losses or legal disputes due to information leakage. We further discuss the application scenarios of financial data privacy protection in public blockchain DeFi ecosystems and cross-border financial applications, such as credit aggregation.

### Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape

- **Authors:** Linkun Fan, Fazhi He, Qing Guo, Wei Tang, Xiaolin Hong, Bing Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16192
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16192
- **Abstract**
To save cost, many deep neural networks (DNNs) are trained on third-party datasets downloaded from the internet, which enables attackers to implant backdoors into DNNs. In the 2D domain, the inherent structures of different image formats are similar, so a backdoor attack designed for one image format will suit the others. However, when it comes to the 3D world, there is a huge disparity among different 3D data structures. As a result, a backdoor pattern designed for one certain 3D data structure will be disabled for other data structures of the same 3D scene.
Therefore, this paper designs a uniform backdoor pattern, NRBdoor (Noisy Rotation Backdoor), which is able to adapt to heterogeneous 3D data structures. Specifically, we start from the unit rotation and then search for the optimal pattern by a noise generation and selection process. The proposed NRBdoor is natural and imperceptible, since rotation is a common operation which usually contains noise, due both to the mismatch between a pair of points and to the sensor calibration error in real-world 3D scenes. Extensive experiments on 3D meshes and point clouds show that the proposed NRBdoor achieves state-of-the-art performance, with negligible shape variation.

### Controllable speech synthesis by learning discrete phoneme-level prosodic representations

- **Authors:** Nikolaos Ellinas, Myrsini Christidou, Alexandra Vioni, June Sig Sung, Aimilios Chalamandaris, Pirros Tsiakoulis, Paris Mastorocostas
- **Subjects:** Sound (cs.SD); Computation and Language (cs.CL); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16307
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16307
- **Abstract**
In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic labels to a prosody encoder module which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, F0 normalization, balanced clustering for duration, and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity.
Instead of relying on reference utterances for inference, we introduce a prior prosody encoder which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario, and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range, despite the variability that a multispeaker setting introduces.

### Fourier-Net: Fast Image Registration with Band-limited Deformation

- **Authors:** Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang, Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, Jinming Duan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16342
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16342
- **Abstract**
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, replacing the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, our Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain.
These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22$\%$ of its parameters and 6.66$\%$ of the mult-adds, achieves a 0.6$\%$ higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.

### Is Twitter Enough? Investigating Situational Awareness in Social and Print Media during the Second COVID-19 Wave in India

- **Authors:** Ishita Vohra, Meher Shashwat Nigam, Aryan Sakaria, Amey Kudari, Nimmi Rangaswamy
- **Subjects:** Social and Information Networks (cs.SI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16360
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16360
- **Abstract**
The pandemic required efficient allocation of public resources and transformed existing ways of societal functioning. To manage any crisis, governments and public health researchers exploit the information available to them in order to make informed decisions, a capability also defined as situational awareness. Gathering situational awareness using social media has proved functional for managing epidemics. Previous research focused on using discussions during periods of epidemic crises on social media platforms like Twitter, Reddit, or Facebook, developing NLP techniques to filter out relevant discussions from a huge corpus of messages and posts. Social media usage varies with internet penetration and other socioeconomic factors, which might induce disparity when analyzing discussions across different geographies. However, print media is a ubiquitous information source, irrespective of geography.
Further, topics discussed in news articles are already newsworthy, while on social media newsworthiness is a product of techno-social processes. Building on this fundamental difference, we study Twitter data during the second wave in India, focused on six high-population cities with varied macroeconomic factors. Through a mixture of qualitative and quantitative methods, we further analyze two Indian newspapers during the same period and compare topics from both Twitter and the newspapers to evaluate the situational awareness around the second phase of COVID on each of these platforms. We conclude that factors like internet penetration and GDP in a specific city influence the discourse surrounding situational updates on social media. Thus, augmenting information from newspapers with information extracted from social media would provide a more comprehensive perspective in resource-deficient cities.

## Keyword: compression

### Post-training Quantization on Diffusion Models

- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736
- **Abstract**
Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories.
However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time step, making previous PTQ methods fail in DMs, since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, the calibration dataset, and the calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance, in a training-free manner. Importantly, our method can serve as a plug-and-play module for other fast-sampling methods, e.g., DDIM.

### Compressing Cross-Lingual Multi-Task Models at Qualtrics

- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927
- **Abstract**
Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems that help understand how people feel, discover the issues they care about, and find which actions need to be taken, on data that are different in content and distribution from traditional NLP domains.
In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to the level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management. ### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm - **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye - **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933 - **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. 
Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation. ### Trustless unknown-order groups - **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE) - **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128 - **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. 
Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation. ### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention - **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368 - **Abstract** Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear. Among them, the low-rank-based methods aim to learn the projection matrices to compress the sequence length. However, the projection matrices are fixed once they have been learned, so they compress the sequence length with dedicated coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of hidden state dimension.
To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while requiring less memory and running at higher speed. ### Compressing Volumetric Radiance Fields to 1 MB - **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386 - **Abstract** Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene.
We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss in visual quality. Extensive experiments demonstrate that the proposed framework achieves unrivaled performance and generalizes well across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance fields methods in real-world applications. Code available at \url{https://github.com/AlgoHunt/VQRF} ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: raw ### Learning Visual Planning Models from Partially Observed Images - **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666 - **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios.
Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations. ### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics - **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671 - **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotated data. However, obtaining such annotated data is expensive in practice. Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and the two-stage learning methods of decoupling feature extraction and classification have been proven effective.
Nevertheless, representation learning limited only to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning method with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of positive and negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features and relax mutual information into a contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets. ### Superpoint Transformer for 3D Scene Instance Segmentation - **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766 - **Abstract** Most existing methods realize 3D instance segmentation by extending those models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) Imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework. 2) Existing methods require a time-consuming intermediate aggregation step. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation.
The key step in this framework is a novel query decoder with transformers that can capture the instance information through the superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can implement network training without the intermediate aggregation step, which accelerates the network. Extensive experiments on ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds state-of-the-art methods by 4.3% on the ScanNetv2 hidden test set in terms of mAP while keeping a fast inference speed (247 ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer. ### ClueWeb22: 10 Billion Web Documents with Rich Information - **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan - **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848 - **Abstract** ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry.
Many of these signals have been widely used in industry but are available to the research community for the first time at this scale. ### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss - **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar - **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047 - **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts; and a low-level neural layer for extracting symbols required to generate the symbolic explanation. Real data is often imperfect meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols, each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive; and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature-values from raw data, using `abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. 
We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by `feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback. ### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning - **Authors:** Guoxi Zhang, Hisashi Kashima - **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078 - **Abstract** Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, neglecting data heterogeneity, existing approaches for behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization of multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage by extending an existing offline RL algorithm. Lastly, with extensive evaluation, this work confirms the existence of behavior misspecification and the efficacy of the proposed model.
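The latent-variable approach in the behavior-estimation abstract above lends itself to a small illustration. The sketch below assumes a toy setting (a discrete action space, two fixed state-independent candidate policies, a uniform prior), and it shows only the trajectory-assignment step of a simple mixture model, not the paper's actual learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical state-independent behavior policies over 3 actions.
policies = np.array([
    [0.8, 0.1, 0.1],  # source A strongly prefers action 0
    [0.1, 0.1, 0.8],  # source B strongly prefers action 2
])

def sample_trajectory(pi, length=20):
    """Sample an action sequence from a fixed policy."""
    return rng.choice(3, size=length, p=pi)

# Multi-source data: trajectories from both sources, source labels unknown.
trajs = [sample_trajectory(policies[0]) for _ in range(30)] + \
        [sample_trajectory(policies[1]) for _ in range(30)]

def responsibilities(traj, candidate_policies):
    """Posterior over which latent policy generated the trajectory,
    assuming a uniform prior (the E-step of a simple mixture model)."""
    log_lik = np.array([np.log(pi[traj]).sum() for pi in candidate_policies])
    log_lik -= log_lik.max()  # numerical stability
    w = np.exp(log_lik)
    return w / w.sum()

# Use as "behavior policy" the candidate that best describes each trajectory.
assignments = np.array([np.argmax(responsibilities(t, policies)) for t in trajs])
```

With trajectories this long, the assignments almost always recover the true sources; a full method would alternate this step with re-estimating the candidate policies from the assigned trajectories.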
### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases - **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko - **Subjects:** Digital Libraries (cs.DL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124 - **Abstract** This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of the Ukrainian Economics discipline with the results of gender and author-ordering analysis at the level of individual authors; special methods are used for working with bibliographic data in which non-English authors' names predominate. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages for working with such specific data help to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo publications and gender mixing patterns are found to be dependent on the journal: differences are found for publications indexed in Scopus and/or Web of Science databases. It has also been found that Ukrainian Economics research is characterized by a largely non-alphabetical ordering of authors. To our knowledge, this is the first large-scale quantitative study of the Ukrainian Economics discipline.
The results obtained are valuable not only at the national level, but also contribute to general knowledge about Economic research, gender issues and authors' names ordering. Here, for the first time, attention is drawn to the explicit use of the features of the Slavic authors' names. ### Trustless unknown-order groups - **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE) - **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128 - **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. 
We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation. ### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices - **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China) - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135 - **Abstract** The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics such as energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets.
We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions. ### Few-shot Query-Focused Summarization with Prefix-Merging - **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164 - **Abstract** Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawn inspiration from prefix-tuning, we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works. 
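The prefix-merging idea in the abstract above can be pictured with a toy tensor manipulation. The shapes follow prefix-tuning (one prefix of vectors per layer), but the concrete merge rule, the optional per-task weights, and all dimensions below are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

num_layers, prefix_len, d_model = 4, 8, 16
rng = np.random.default_rng(1)

# Hypothetical prefixes learned separately on two source tasks
# (one prefix of continuous vectors per transformer layer).
prefix_summarization = rng.normal(size=(num_layers, prefix_len, d_model))
prefix_qa            = rng.normal(size=(num_layers, prefix_len, d_model))

def merge_prefixes(prefixes, weights=None):
    """Merge task prefixes by concatenating them along the prefix-length
    axis, optionally rescaled by per-task weights."""
    if weights is None:
        weights = np.ones(len(prefixes))
    scaled = [w * p for w, p in zip(weights, prefixes)]
    return np.concatenate(scaled, axis=1)

merged = merge_prefixes([prefix_summarization, prefix_qa])
```

The merged prefix would then be prepended to the attention keys and values when fine-tuning on the target query-focused summarization task.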
### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model - **Authors:** Gwanghyun Kim, Se Young Chun - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374 - **Abstract** Recent 3D generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance on converting the 2D generative model on one domain into the models on other domains with different styles by leveraging the CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback of them is that the sample diversity in the original generative model is not well-preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation will be even more challenging for 3D generative models not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality. Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. 
Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text. ### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations - **Authors:** Marissa D'Alonzo, Rebecca Russell - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381 - **Abstract** Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy. 
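The symmetry-detection recipe above (transform trajectories, then check whether a discriminator can tell them apart) can be sketched on toy data. A real implementation trains an RNN discriminator; the stand-in below is a one-feature threshold classifier, which is purely an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D random-walk trajectories with a symmetric step distribution,
# so the reflection x -> -x is a true symmetry of the dynamics.
def sample_batch(n=200, length=50):
    return np.cumsum(rng.normal(size=(n, length)), axis=1)

def discriminator_accuracy(real, transformed):
    """Stand-in for an RNN discriminator: threshold one summary feature
    (mean absolute step size). Accuracy near 0.5 means the transform is
    indistinguishable, i.e. a plausible symmetry; accuracy near 1.0
    rules the candidate symmetry out."""
    f_real = np.abs(np.diff(real, axis=1)).mean(axis=1)
    f_tr = np.abs(np.diff(transformed, axis=1)).mean(axis=1)
    threshold = (f_real.mean() + f_tr.mean()) / 2
    return ((f_real <= threshold).mean() + (f_tr > threshold).mean()) / 2

batch = sample_batch()
acc_reflection = discriminator_accuracy(batch, -sample_batch())    # candidate: x -> -x
acc_scaling = discriminator_accuracy(batch, 3.0 * sample_batch())  # candidate: x -> 3x
```

Here `acc_reflection` hovers near 0.5 (reflection really is a symmetry of this system) while `acc_scaling` approaches 1.0 (scaling is not), matching the paper's use of discriminator accuracy as a symmetry score.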
### Abstract Visual Reasoning with Tangram Shapes - **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492 - **Abstract** We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram . ## Keyword: raw image ### Learning Visual Planning Models from Partially Observed Images - **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666 - **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. 
It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations.
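To make the pipeline in the abstract above concrete, here is a heavily simplified linear sketch: observations are encoded into a latent space, and a transition model is fitted only on consecutive pairs of frames that were both observed. The fixed pseudo-inverse encoder, the linear dynamics, and all dimensions are illustrative assumptions; Recplan itself learns neural representations from images:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth latent dynamics, used here only to generate synthetic traces.
d_obs, d_latent, T = 32, 4, 2000
A_true = np.diag([0.9, 0.8, 0.7, 0.6])
decoder = rng.normal(size=(d_latent, d_obs))  # latent -> "image" features

z = np.zeros((T, d_latent))
z[0] = rng.normal(size=d_latent)
for t in range(1, T):
    z[t] = z[t - 1] @ A_true + 0.05 * rng.normal(size=d_latent)
obs = z @ decoder  # the raw observation trace

# Partial observability: roughly 30% of the frames in the trace are missing.
observed = rng.random(T) > 0.3

# A fixed linear "encoder" (pseudo-inverse of the decoder); Recplan would
# instead learn this encoding from the preceding and subsequent images.
encode = np.linalg.pinv(decoder)
z_hat = obs @ encode

# Fit the latent transition model z_{t+1} ~ z_t A by least squares,
# using only consecutive pairs where both frames were observed.
pairs = [t for t in range(T - 1) if observed[t] and observed[t + 1]]
X = z_hat[pairs]
Y = z_hat[[t + 1 for t in pairs]]
A_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

With enough observed pairs, `A_fit` closely recovers the generating dynamics, which is the property a planner built on top of the transition model relies on.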
# New submissions for Wed, Nov

## Keyword: event camera

There is no result

## Keyword: white balance

There is no result

## Keyword: isp

### FETI-DP Preconditioners for Biot Model with Discontinuous Galerkin Discretization
- **Authors:** Pilhwa Lee
- **Subjects:** Numerical Analysis (math.NA); Analysis of PDEs (math.AP)
- **Abstract** Dual-primal FETI (FETI-DP) preconditioners are developed for a Biot model. The model is formulated with mixed finite elements as a saddle-point problem. The displacement $\mathbf{u}$ and the Darcy flux flow $\mathbf{z}$ are represented with piecewise continuous elements, and the pore pressure $p$ with piecewise constant elements, i.e., overall three fields with a stabilizing term. We have tested the functionality of FETI-DP with and without Dirichlet preconditioners. Numerical experiments show a signature of scalability of the resulting parallel algorithm in compressible elasticity with permeable Darcy flow, as well as almost incompressible elasticity.

### Predicting Football Match Outcomes with Explainable Machine Learning and the Kelly Index
- **Authors:** Yiming Ren, Teo Susnjak
- **Subjects:** Machine Learning (cs.LG)
- **Abstract** In this work, a machine learning approach is developed for predicting the outcomes of football matches. The novelty of this research lies in the utilisation of the Kelly Index to first classify matches into categories, where each one denotes a different level of predictive difficulty. Classification models using a wide suite of algorithms were developed for each category of matches in order to determine the efficacy of the approach. In conjunction to this, a set of previously unexplored features were engineered, including Elo-based variables. The dataset originated from the Premier League match data covering the seasons. The findings indicate that the process of decomposing the predictive problem into sub-tasks was effective and produced competitive results with prior works, while the ensemble-based methods were the most effective. The paper also devised an investment strategy in order to evaluate its effectiveness by benchmarking against bookmaker odds. An approach was developed that minimises risk by combining the Kelly Index with the predefined confidence thresholds of the predictive models. The experiments found that the proposed strategy can return a profit when following a conservative approach that focuses primarily on easy-to-predict matches where the predictive models display a high confidence level.

### Understanding the Impact of Adversarial Robustness on Accuracy Disparity
- **Authors:** Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
- **Subjects:** Machine Learning (cs.LG); Machine Learning (stat.ML)
- **Abstract** While it has long been empirically observed that adversarial robustness may be at odds with standard accuracy and may have further disparate impacts on different classes, it remains an open question to what extent such observations hold and how the class imbalance plays a role within. In this paper, we attempt to understand this question of accuracy disparity by taking a closer look at linear classifiers under a Gaussian mixture model. We decompose the impact of adversarial robustness into two parts: an inherent effect that will degrade the standard accuracy on all classes, and the other caused by the class imbalance ratio, which will increase the accuracy disparity compared to standard training. Furthermore, we also extend our model to the general family of stable distributions. We demonstrate that, while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution. We additionally perform experiments on both synthetic and real-world datasets. The empirical results not only corroborate our theoretical findings but also suggest that the implications may extend to nonlinear models over real-world datasets.
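The football-prediction entry above combines the Kelly Index with predefined model-confidence thresholds. As a minimal, generic sketch of such a staking rule (our own simplification, not the paper's actual strategy — the threshold value and function names are assumptions):

```python
# Generic Kelly staking with a confidence gate, as a hedged illustration.
def kelly_fraction(p, decimal_odds):
    """Kelly criterion: f* = (b*p - q) / b, with b = decimal_odds - 1, q = 1 - p."""
    b = decimal_odds - 1.0
    return (b * p - (1.0 - p)) / b

def stake(bankroll, p, decimal_odds, confidence, threshold=0.8):
    """Bet only on high-confidence predictions; never stake a negative edge."""
    if confidence < threshold:
        return 0.0
    return bankroll * max(0.0, kelly_fraction(p, decimal_odds))
```

For example, a 60% win probability at decimal odds of 2.5 gives a positive Kelly fraction, so a confident model would stake a third of the bankroll, while any negative-edge or low-confidence bet is skipped entirely — which mirrors the conservative, easy-match-focused behaviour the abstract describes.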
### Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework
- **Authors:** Amin Shojaeighadikolaei, Arman Ghasemi, Kailani Jones, Yousif Dafalla, Alexandru G. Bardas, Reza Ahmadi, Morteza Haashemi
- **Subjects:** Multiagent Systems (cs.MA); Machine Learning (cs.LG); Systems and Control (eess.SY)
- **Abstract** This paper presents a multi-agent deep reinforcement learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end users' energy bills. However, the conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy that is achieved using deep reinforcement learning. Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the deep Q-network (DQN) demonstrate significant improvements of the hour accumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators.

### Advisory Tool for Managing Failure Cascades in Systems with Wind Power
- **Authors:** Siyu Liu, Marija Ilic
- **Subjects:** Systems and Control (eess.SY)
- **Abstract** This paper concerns the resilience of systems with wind power upon wind reduction, by evaluating the potential of corrective actions such as generation and load dispatch on minimizing the effects of transmission line failures. Three functions — grid-centric loss, consumer-centric loss, and resilience impact — are used to statistically evaluate the criticality of initial contingent failures and wind reductions. Our model is learned with Monte Carlo, convex optimization and adaptive selection, illustrated on the IEEE and IEEE bus systems with both AC and DC models. We highlight the impact of wind reductions and propose physically implementable solutions.

### Data Privacy Protection in DeFi Protocols
- **Authors:** Jiawei Zhu, Zhuangtong Huang, Yixin Xu, Jerome Yen, Ye Wang
- **Subjects:** Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Abstract** With the development of decentralized finance (DeFi), the inherent limitations caused by the blockchain system have come to the surface. Because recorded data on the blockchain is available to system participants, DeFi protocols may not collect the private data of users; otherwise, the information leakage may result in serious financial losses or cause legal issues. Therefore, DeFi protocols could hardly offer different users customized solutions, and the capital utilization is limited. To address this challenge in DeFi, we propose a solution: a trustful protocol that allows users to provide personal private data to DeFi protocols without worrying that such information would be disclosed. By implementing asymmetric encryption, zero-knowledge proof, and homomorphic encryption, we ensure that users' data will not be controlled by any centralized authorities, and avoid potential financial losses or legal disputes due to information leakage. We further discuss the application scenarios of financial data privacy protection in public blockchain DeFi ecosystems and cross-border financial applications such as credit aggregation.

### Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape
- **Authors:** Linkun Fan, Fazhi He, Qing Guo, Wei Tang, Xiaolin Hong, Bing Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Abstract** For saving cost, many deep neural networks (DNNs) are trained on third-party datasets downloaded from the internet, which enables an attacker to implant a backdoor into DNNs. In the 2D domain, the inherent structures of different image formats are similar; hence, a backdoor attack designed for one image format will suit others. However, when it comes to the 3D world, there is a huge disparity among different data structures. As a result, a backdoor pattern designed for one certain data structure will be disabled for other data structures of the same scene. Therefore, this paper designs a uniform backdoor pattern, NRBdoor (Noisy Rotation Backdoor), which is able to adapt to heterogeneous data structures. Specifically, we start from the unit rotation and then search for the optimal pattern by a noise generation and selection process. The proposed NRBdoor is natural and imperceptible, since rotation is a common operation which usually contains noise, due to both the mismatch between a pair of points and the sensor calibration error for a real-world scene. Extensive experiments on mesh and point cloud show that the proposed NRBdoor achieves state-of-the-art performance, with negligible shape variation.

### Controllable Speech Synthesis by Learning Discrete Phoneme-Level Prosodic Representations
- **Authors:** Nikolaos Ellinas, Myrsini Christidou, Alexandra Vioni, June Sig Sung, Aimilios Chalamandaris, Pirros Tsiakoulis, Paris Mastorocostas
- **Subjects:** Sound (cs.SD); Computation and Language (cs.CL); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
- **Abstract** In this paper, we present a novel method for phoneme-level prosody control of F0 and duration using intuitive discrete labels. We propose an unsupervised prosodic clustering process, which is used to discretize phoneme-level F0 and duration features from a multispeaker speech dataset. These features are fed as an input sequence of prosodic
labels to a prosody encoder module, which augments an autoregressive attention-based text-to-speech model. We utilize various methods in order to improve prosodic control range and coverage, such as augmentation, normalization, balanced clustering for duration, and speaker-independent clustering. The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity. Instead of relying on reference utterances for inference, we introduce a prior prosody encoder, which learns the style of each speaker and enables speech synthesis without the requirement of reference audio. We also fine-tune the multispeaker model to unseen speakers with limited amounts of data, as a realistic application scenario, and show that the prosody control capabilities are maintained, verifying that the speaker-independent prosodic clustering is effective. Experimental results show that the model has high output speech quality and that the proposed method allows efficient prosody control within each speaker's range, despite the variability that a multispeaker setting introduces.

### Fourier-Net: Fast Image Registration with Band-Limited Deformation
- **Authors:** Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang, Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, Jinming Duan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Abstract** Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, this process is, however, resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, replacing the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of our Fourier-Net learning to output a full-resolution displacement field in the spatial domain, we learn its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded, by our devised model-driven decoder consisting of a zero-padding layer and an inverse discrete Fourier transform layer, to the dense full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, only using of its parameters and of the mult-adds, achieves a higher Dice score and a times faster inference speed. Code is available at url.

### Is Twitter Enough? Investigating Situational Awareness in Social and Print Media during the Second COVID Wave in India
- **Authors:** Ishita Vohra, Meher Shashwat Nigam, Aryan Sakaria, Amey Kudari, Nimmi Rangaswamy
- **Subjects:** Social and Information Networks (cs.SI)
- **Abstract** The pandemic required efficient allocation of public resources and transforming existing ways of societal functions. To manage any crisis, governments and public health researchers exploit the information available to them in order to make informed decisions, also defined as situational awareness. Gathering situational awareness using social media has been functional in managing epidemics. Previous research focused on using discussions during periods of epidemic crises on social media platforms like Twitter, Reddit, or Facebook, and on developing NLP techniques to filter out relevant discussions from a huge corpus of messages and posts. Social media usage varies with internet penetration and other socioeconomic factors, which might induce disparity in analyzing discussions across different geographies. However, print media is a ubiquitous information source irrespective of geography. Further, topics discussed in news articles are already newsworthy, while on social media, newsworthiness is a product of techno-social processes. Developing this fundamental difference, we study Twitter data during the second wave in India, focused on six high-population cities with varied macroeconomic factors, through a mixture of qualitative and quantitative methods. We further analyze two Indian newspapers during the same period and compare topics from both Twitter and the newspapers to evaluate situational awareness around the second phase of COVID on each of these platforms. We conclude that factors like internet penetration and GDP in a specific city influence the discourse surrounding situational updates on social media. Thus, augmenting information from newspapers with information extracted from social media would provide a more comprehensive perspective in resource-deficit cities.

## Keyword: compression

### Post-Training Quantization on Diffusion Models
- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. It prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM
acceleration. However, the output distributions of noise estimation networks change with time step, making previous PTQ methods fail in DMs, since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.

### Compressing Cross-Lingual Multi-Task Models at Qualtrics
- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Abstract** Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken, on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to the level acceptable for business needs, while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBERT architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to with average task degradation (or speedup with degradation) and estimated savings of over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.

### Maximal Atomic Irredundant Sets: A Usage-Based Dataflow Partitioning Algorithm
- **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye
- **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation.

### Trustless Unknown-Order Groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Abstract** Groups of unknown order are of major interest due to their applications, including time-lock puzzles, verifiable delay functions, and accumulators. In this paper, we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least bits to meet a bit security level. For ideal class groups, this leads to discriminants of around bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately log n bits to represent an element in a group of order n. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to log n bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of log n bits. We discuss point-counting approaches for hyperelliptic curves, and argue that genus curves are secure in the trustless unknown-order setting. We conclude that in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level, both in the group operation and in the size of the element representation.

### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention
- **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang
- **Subjects:** Machine Learning (cs.LG)
- **Abstract** Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear. Among them, the low-rank-based methods aim to learn the projection matrices to compress the sequence length. However, the projection matrices are fixed once they have been learned, which compress the sequence length with dedicated coefficients for
tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of hidden state dimension. To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension, while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines, while maintaining less memory consumption with higher speed.

### Compressing Volumetric Radiance Fields to MB
- **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Abstract** Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning, by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of times by reducing the overall model size to MB, with negligible loss on visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and good generalization across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications. Code available at url.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: raw

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations.

### Deep Semi-Supervised Learning with Double Contrast of Features and Semantics
- **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
- **Subjects:** Machine Learning (cs.LG)
- **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotation data. However, obtaining these annotated data has to afford expensive costs in reality. Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and the two-stage learning methods of decoupling feature extraction and classification have been proven effective. Nevertheless, representation learning that is only limited to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics and features of positive and
negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features, and slack mutual information to contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets.

### Superpoint Transformer for 3D Scene Instance Segmentation
- **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Abstract** Most existing methods realize instance segmentation by extending those models used for object detection or semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall instance segmentation framework, and the existing method requires a time-consuming intermediate step of aggregation. To address these issues, this paper proposes a novel end-to-end instance segmentation method based on a superpoint transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that can capture the instance information through the superpoint cross-attention mechanism and generate the superpoint masks of the instances through bipartite matching. Based on superpoint masks, SPFormer can implement the network training without the intermediate aggregation step, which accelerates the network. Extensive experiments on and benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds compared state-of-the-art methods by on the hidden test set in terms of mAP, and keeps fast inference speed per frame simultaneously. Code is available at

### Billion Web Documents with Rich Information
- **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
- **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Abstract** The newest iteration of the ClueWeb line of datasets provides billion web pages affiliated with rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, it includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text, to lower the barrier to entry. Many of these signals have been widely used in industry, but are available to the research community for the first time at this scale.

### Neural Feature Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss
- **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar
- **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO)
- **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts, and a low-level neural layer for extracting the symbols required to generate the symbolic explanation. Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment. Manual re-annotation of the raw data each time this happens is laborious and expensive, and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature values from raw data, using abduction. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that, without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features, in the form of even imperfect pre-training, can help correct this situation. These findings are replicated on the original problem considered by NEUROLOG, without the use of feature delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by feature adaptation of pre-trained neural extractors using the semantic loss function constrained by abductive feedback.

### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning
- **Authors:** Guoxi Zhang, Hisashi Kashima
- **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO)
- **Abstract** Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, neglecting data heterogeneity, existing approaches for behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization of multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model, and illustrates its practical usage by extending an existing offline RL algorithm. Lastly, with extensive evaluation, this work confirms the existence of behavior misspecification and the efficacy of the proposed model.

### Peculiarities of Gender Disambiguation and Ordering of Non-English Authors' Names for Economic Papers Beyond Core Databases
- **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko
- **Subjects:** Digital Libraries (cs.DL)
- **Abstract** This paper presents the results of further exploration of Crossref data related to Ukrainian economics research (the first part can be found in ). Our purpose is to supplement the quantitative portrait of the Ukrainian economics discipline with the results of gender and author-ordering analysis at the level of individual authors. Special methods of working with bibliographic data with a predominant share of non-English authors are used. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names, are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of economics is constructed using Crossref metadata. The described stages for working with such specific data help to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian economics. The tendencies towards collaborative or solo publications and gender mixing patterns are found to be dependent on the journal. The differences for publications indexed in Scopus and/or
web of science databases are found it has also been found that ukrainian economics research is characterized by rather a non alphabetical order of authors to our knowledge this is the first large scale quantitative study of ukrainian economic discipline the results obtained are valuable not only at the national level but also contribute to general knowledge about economic research gender issues and authors names ordering here for the first time attention is drawn to the explicit use of the features of the slavic authors names trustless unknown order groups authors samuel dobson steven galbraith benjamin smith grace subjects cryptography and security cs cr number theory math nt arxiv link pdf link abstract groups of unknown order are of major interest due to their applications including time lock puzzles verifiable delay functions and accumulators in this paper we focus on trustless setup in this setting the most popular unknown order group construction is ideal class groups of imaginary quadratic fields we argue that the full impact of sutherland s generic group order algorithm has not been recognised in this context and show that group sizes currently being proposed in practice namely approximately bits do not meet the claimed security level instead we claim that random group orders should be at least bits to meet a bit security level for ideal class groups this leads to discriminants of around bits which are much larger than desirable one drawback of class groups is that current approaches require approximately log n bits to represent an element in a group of order n we provide two solutions to mitigate this blow up in the size of representations first we explain how an idea of bleichenbacher can be used to compress class group elements to log n bits second we note that using jacobians of hyperelliptic curves in other words class groups of quadratic function fields allows efficient compression to the optimal element representation size of log n bits we discuss 
point counting approaches for hyperelliptic curves and argue that genus curves are secure in the trustless unknown order setting we conclude that in practice jacobians of hyperelliptic curves are more efficient in practice than ideal class groups at the same security level both in the group operation and in the size of the element representation adaenlight energy aware low light video stream enhancement on mobile devices authors sicong liu northwestern polytechnical university china xiaochen li northwestern polytechnical university china zimu zhou city university of hong kong china bin guo northwestern polytechnical university china meng zhang northwestern polytechnical university china haochen shen northwestern polytechnical university china zhiwen yu northwestern polytechnical university china subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the ubiquity of camera embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications these applications often demand on device processing of video streams to deliver real time high quality services for privacy and robustness concerns however the performance of these applications is constrained by the raw video streams which tend to be taken with small aperture cameras of ubiquitous mobile platforms in dim light despite extensive low light video enhancement solutions they are unfit for deployment to mobile devices due to their complex models and and ignorance of system dynamics like energy budgets in this paper we propose adaenlight an energy aware low light video stream enhancement system on mobile devices it achieves real time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform imposed dynamic energy budgets we report extensive experiments on diverse datasets scenarios and platforms and demonstrate the superiority of adaenlight compared with state of the art low light image and video 
enhancement solutions few shot query focused summarization with prefix merging authors ruifeng yuan zili wang ziqiang cao wenjie li subjects computation and language cs cl artificial intelligence cs ai arxiv link pdf link abstract query focused summarization has been considered as an important extension for text summarization it aims to generate a concise highlight for a given query different from text summarization query focused summarization has long been plagued by the problem of lacking high quality large scale datasets in this paper we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few shot learning in query focused summarization here we propose prefix merging a prefix based pretraining strategy for few shot learning in query focused summarization drawn inspiration from prefix tuning we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query focused summarization with only a small amount of trainable parameters prefix merging outperforms fine tuning on query focused summarization we further discuss the influence of different prefix designs and propose a visualized explanation for how prefix merging works datid diversity preserved domain adaptation using text to image diffusion for generative model authors gwanghyun kim se young chun subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract recent generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed shapes but training them for diverse domains is challenging since it requires massive training images and their camera distribution information text guided domain adaptation methods have shown impressive performance on converting the generative model on one domain into the models 
on other domains with different styles by leveraging the clip contrastive language image pre training rather than collecting massive datasets for those domains however one drawback of them is that the sample diversity in the original generative model is not well preserved in the domain adapted generative models due to the deterministic nature of the clip text encoder text guided domain adaptation will be even more challenging for generative models not only because of catastrophic diversity loss but also because of inferior text image correspondence and poor image quality here we propose datid a domain adaptation method tailored for generative models using text to image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain unlike extensions of prior text guided domain adaptation methods our novel pipeline was able to fine tune the state of the art generator of the source domain to synthesize high resolution multi view consistent images in text guided targeted domains without additional data outperforming the existing text guided domain adaptation methods in diversity and text image correspondence furthermore we propose and demonstrate diverse image manipulations such as one shot instance selected adaptation and single view manipulated reconstruction to fully enjoy diversity in text symmetry detection in trajectory data for more meaningful reinforcement learning representations authors marissa d alonzo rebecca russell subjects machine learning cs lg artificial intelligence cs ai robotics cs ro arxiv link pdf link abstract knowledge of the symmetries of reinforcement learning rl systems can be used to create compressed and semantically meaningful representations of a low level state space we present a method of automatically detecting rl symmetries directly from raw trajectory data without requiring active control of the system our method generates candidate symmetries and 
trains a recurrent neural network rnn to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry the rnn discriminator s accuracy for each candidate reveals how symmetric the system is under that transformation this information can be used to create high level representations that are invariant to all symmetries on a dataset level and to communicate properties of the rl behavior to users we show in experiments on two simulated rl use cases a pusher robot and a uav flying in wind that our method can determine the symmetries underlying both the environment physics and the trained rl policy abstract visual reasoning with tangram shapes authors anya ji noriyuki kojima noah rush alane suhr wai keen vong robert d hawkins yoav artzi subjects computation and language cs cl artificial intelligence cs ai computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract we introduce kilogram a resource for studying abstract visual reasoning in humans and machines drawing on the history of tangram puzzles as stimuli in cognitive science we build a richly annotated dataset that with distinct stimuli is orders of magnitude larger and more diverse than prior resources it is both visually and linguistically richer moving beyond whole shape descriptions to include segmentation maps and part labels we use this resource to evaluate the abstract visual reasoning capacities of recent multi modal models we observe that pre trained weights demonstrate limited abstract reasoning which dramatically improves with fine tuning we also observe that explicitly describing parts aids abstract reasoning for both humans and models especially when jointly encoding the linguistic and visual inputs kilogram is available at keyword raw image learning visual planning models from partially observed images authors kebing jin zhanhao xiao hankui hankz zhuo hai wan jiaran cai subjects machine learning cs lg artificial 
intelligence cs ai computer vision and pattern recognition cs cv arxiv link pdf link abstract there has been increasing attention on planning model learning in classical planning most existing approaches however focus on learning planning models from structured data in symbolic representations it is often difficult to obtain such structured data in real world scenarios although a number of approaches have been developed for learning planning models from fully observed unstructured data e g images in many scenarios raw observations are often incomplete in this paper we provide a novel framework atype recplan for learning a transition model from partially observed raw image traces more specifically by considering the preceding and subsequent images in a trace we learn the latent state representations of raw observations and then build a transition model based on such representations additionally we propose a neural network based approach to learn a heuristic model that estimates the distance toward a given goal observation based on the learned transition model and heuristic model we implement a classical planner for images we exhibit empirically that our approach is more effective than a state of the art approach of learning visual planning models in the environment with incomplete observations
1
231,714
18,790,245,604
IssuesEvent
2021-11-08 16:05:35
Azure/azure-sdk-for-js
https://api.github.com/repos/Azure/azure-sdk-for-js
closed
Azure Form Recognizer Samples Issue
bug Client Docs Cognitive - Form Recognizer test-manual-pass
1. Section [link1](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/buildModel.ts),[link2](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/copyModel.ts),[link3](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js),[link4](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/copyModel.js),[link5](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/typescript/src/buildModel.ts),[link6](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/typescript/src/copyModel.ts),[link7](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/buildModel.js),[link8](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js),[link9](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/typescript/src/buildModel.ts),[link10](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/typescript/src/copyModel.ts): ![image](https://user-images.githubusercontent.com/80496810/139009177-d7e4ed10-603c-404c-9a7c-4cc33f34cde2.png) Suggestion: If in `.ts` file, add the code as follow: ``` import * as dotenv from "dotenv"; dotenv.config(); ``` If in `.js` file, add the code as follow: ``` const dotenv = require("dotenv"); dotenv.config(); ``` @meeraharidasa, @mayurid , @ramya-rao-a and @jeremymeng for notification.
1.0
Azure Form Recognizer Samples Issue - 1. Section [link1](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/buildModel.ts),[link2](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/copyModel.ts),[link3](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js),[link4](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/copyModel.js),[link5](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/typescript/src/buildModel.ts),[link6](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/typescript/src/copyModel.ts),[link7](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/buildModel.js),[link8](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js),[link9](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/typescript/src/buildModel.ts),[link10](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4/typescript/src/copyModel.ts): ![image](https://user-images.githubusercontent.com/80496810/139009177-d7e4ed10-603c-404c-9a7c-4cc33f34cde2.png) Suggestion: If in `.ts` file, add the code as follow: ``` import * as dotenv from "dotenv"; dotenv.config(); ``` If in `.js` file, add the code as follow: ``` const dotenv = require("dotenv"); dotenv.config(); ``` @meeraharidasa, @mayurid , @ramya-rao-a and @jeremymeng for notification.
non_process
azure form recognizer samples issue section suggestion if in ts file add the code as follow import as dotenv from dotenv dotenv config if in js file add the code as follow const dotenv require dotenv dotenv config meeraharidasa mayurid ramya rao a and jeremymeng for notification
0
79,795
10,142,059,663
IssuesEvent
2019-08-03 20:02:28
brotkrueml/schema
https://api.github.com/repos/brotkrueml/schema
closed
Don't embed schema markup when no_index=1
documentation enhancement
When the page should not be indexed by search engines (the page field "no_index" is activated) no schema markup should be rendered. This saves some bytes. **Acceptance criteria:** - [x] If the "no_index" checkbox (from seo extension) is activated in the page properties, the field for selecting the web page type is hidden. - [x] If the seo extension is not activated, the field for selecting the web page type is always shown. - [x] If the "no_index" checkbox is not activated in the page properties, the schema markup is embedded into the web page. - [x] If the "no_index" checkbox is activated in the page properties, no schema markup is embedded into the web page. - [x] If the seo extension is not activated, the schema markup is always embedded into the web page. - [x] The documentation is adjusted.
1.0
Don't embed schema markup when no_index=1 - When the page should not be indexed by search engines (the page field "no_index" is activated) no schema markup should be rendered. This saves some bytes. **Acceptance criteria:** - [x] If the "no_index" checkbox (from seo extension) is activated in the page properties, the field for selecting the web page type is hidden. - [x] If the seo extension is not activated, the field for selecting the web page type is always shown. - [x] If the "no_index" checkbox is not activated in the page properties, the schema markup is embedded into the web page. - [x] If the "no_index" checkbox is activated in the page properties, no schema markup is embedded into the web page. - [x] If the seo extension is not activated, the schema markup is always embedded into the web page. - [x] The documentation is adjusted.
non_process
don t embed schema markup when no index when the page should not be indexed by search engines the page field no index is activated no schema markup should be rendered this saves some bytes acceptance criteria if the no index checkbox from seo extension is activated in the page properties the field for selecting the web page type is hidden if the seo extension is not activated the field for selecting the web page type is always shown if the no index checkbox is not activated in the page properties the schema markup is embedded into the web page if the no index checkbox is activated in the page properties no schema markup is embedded into the web page if the seo extension is not activated the schema markup is always embedded into the web page the documentation is adjusted
0
19,653
26,010,586,445
IssuesEvent
2022-12-21 01:02:33
ossf/package-analysis
https://api.github.com/repos/ossf/package-analysis
closed
Disable "Dismiss stale pull request approvals when new commits are pushed"
process
Rationale from offline chat: - when enabled, it limits the ability to do "LGTM with nits" and ends up needing more back and forth for small issues to be resolved - also applies to rebases and things like fixing commit signoffs where approval status is essentially unchanged
1.0
Disable "Dismiss stale pull request approvals when new commits are pushed" - Rationale from offline chat: - when enabled, it limits the ability to do "LGTM with nits" and ends up needing more back and forth for small issues to be resolved - also applies to rebases and things like fixing commit signoffs where approval status is essentially unchanged
process
disable dismiss stale pull request approvals when new commits are pushed rationale from offline chat when enabled it limits the ability to do lgtm with nits and ends up needing more back and forth for small issues to be resolved also applies to rebases and things like fixing commit signoffs where approval status is essentially unchanged
1
85
2,533,378,389
IssuesEvent
2015-01-23 22:52:04
MozillaFoundation/plan
https://api.github.com/repos/MozillaFoundation/plan
opened
Create resources for quicker design implementation
process
We have a need to ship design more quickly, in particular to ship design and dev within a single heartbeat. Other than ensuring heartbeats are scoped appropriately, what are some other tools or methods that would make design move faster without compromising quality or burning out the team? Some ideas: * update to Makerstrap * standardized PSDs or whatever a sketch file is, with grid, type elements, different layouts depending on the brief (ie. Open Science and Advocacy have very similar layouts, hmmm....) * up-to-date Resources drive (that @xmatthewx has been wrangling) * the same toolsets (does it slow us down that some are in photoshop, others in sketch?) * QA checklists – @iamjessklein can you document the one you were using for Privacy in the Design Handbook? @davidascher @thisandagain Do you guys want to add any thoughts to this?
1.0
Create resources for quicker design implementation - We have a need to ship design more quickly, in particular to ship design and dev within a single heartbeat. Other than ensuring heartbeats are scoped appropriately, what are some other tools or methods that would make design move faster without compromising quality or burning out the team? Some ideas: * update to Makerstrap * standardized PSDs or whatever a sketch file is, with grid, type elements, different layouts depending on the brief (ie. Open Science and Advocacy have very similar layouts, hmmm....) * up-to-date Resources drive (that @xmatthewx has been wrangling) * the same toolsets (does it slow us down that some are in photoshop, others in sketch?) * QA checklists – @iamjessklein can you document the one you were using for Privacy in the Design Handbook? @davidascher @thisandagain Do you guys want to add any thoughts to this?
process
create resources for quicker design implementation we have a need to ship design more quickly in particular to ship design and dev within a single heartbeat other than ensuring heartbeats are scoped appropriately what are some other tools or methods that would make design move faster without compromising quality or burning out the team some ideas update to makerstrap standardized psds or whatever a sketch file is with grid type elements different layouts depending on the brief ie open science and advocacy have very similar layouts hmmm up to date resources drive that xmatthewx has been wrangling the same toolsets does it slow us down that some are in photoshop others in sketch qa checklists – iamjessklein can you document the one you were using for privacy in the design handbook davidascher thisandagain do you guys want to add any thoughts to this
1
282,133
24,452,003,518
IssuesEvent
2022-10-07 00:35:13
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[DocDB] flaky test: YBBackupTest.TestYSQLTabletSplitRangeUniqueIndexOnHiddenColumn
kind/failing-test area/docdb priority/high status/awaiting-triage
### Description https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&buckets=20&build_type=all&class=YBBackupTest&fail_tag=all&name=TestYSQLTabletSplitRangeUniqueIndexOnHiddenColumn&platform=linux Traces back to https://github.com/yugabyte/yugabyte-db/commit/7583bcc0ea272ef8742adff796b36e354f9b6767
1.0
[DocDB] flaky test: YBBackupTest.TestYSQLTabletSplitRangeUniqueIndexOnHiddenColumn - ### Description https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&buckets=20&build_type=all&class=YBBackupTest&fail_tag=all&name=TestYSQLTabletSplitRangeUniqueIndexOnHiddenColumn&platform=linux Traces back to https://github.com/yugabyte/yugabyte-db/commit/7583bcc0ea272ef8742adff796b36e354f9b6767
non_process
flaky test ybbackuptest testysqltabletsplitrangeuniqueindexonhiddencolumn description traces back to
0
2,278
5,105,231,767
IssuesEvent
2017-01-05 06:10:38
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
opened
Process new data contributions
0. Ready for Analysis Collectors Processors
@georgiana-b @nightsh this is quite urgent, especially the Ebola-related contributions.
1.0
Process new data contributions - @georgiana-b @nightsh this is quite urgent, especially the Ebola-related contributions.
process
process new data contributions georgiana b nightsh this is quite urgent especially the ebola related contributions
1
529,608
15,392,304,414
IssuesEvent
2021-03-03 15:29:17
Prodigy-Hacking/ProdigyMathGameHacking
https://api.github.com/repos/Prodigy-Hacking/ProdigyMathGameHacking
closed
[BUG] Error's on startup, tested multiple times, repeatable.
Bug Confirmed Priority: High
***Description***: "Uncaught (in promise) TypeError: Failed to fetch" (async () => { const debug = false; const redirectorDomain = debug ? "http://localhost:1337" : "https://prodigyhacking.ml" if (!window.abortion) { // only run inject script once on the page, even if game.min is requested multiple times window.abortion = "Hey, we've injected the thingy"; // check for outdated plugin const pluginVersion = chrome.runtime.getManifest().version; const supportedVersion = (await (await fetch(`${redirectorDomain}/version`)).text()); if (pluginVersion !== supportedVersion) { const res = confirm("Outdated plugin version! Hacks are not guaranteed to work! If you would like to update, please click 'OK'"); if (res) location = "https://github.com/Prodigy-Hacking/ProdigyMathGameHacking/wiki/How-to-Update"; } // die, integrity [...document.getElementsByTagName("script"), ...document.getElementsByTagName("link")].forEach(v => { if (v.integrity) { console.log(v.integrity); v.removeAttribute("integrity"); } }); // <link rel="preload" href="https://code.prodigygame.com/code/3-13-0/game.min.js?v=3-13-0" as="script" crossorigin="anonymous"></link> /* const prelly = document.createElement("link"); prelly.rel = "preload"; prelly.href = `${redirectorDomain}/game.min.js`; */ // <script src="https://code.prodigygame.com/code/3-13-0/game.min.js?v=3-13-0" onload="SW.Load.onGameLoad();" crossorigin="anonymous"></script> // we cancel the real game.min, and just append ours // a messy solution for sure, but this should only be a bandaid on a bulletwound const penguin = document.createElement("script"); penguin.src = `${redirectorDomain}/game.min.js`; document.body.append(penguin); } })(); ***Replication***: Startup prodigy, login, the loading with be stopped mid way through. ***Images***: ![image](https://user-images.githubusercontent.com/60020494/109702568-6798a300-7b62-11eb-9fa0-baba10dc74f4.png) ***Additional Errors***: That is all.
1.0
[BUG] Error's on startup, tested multiple times, repeatable. - ***Description***: "Uncaught (in promise) TypeError: Failed to fetch" (async () => { const debug = false; const redirectorDomain = debug ? "http://localhost:1337" : "https://prodigyhacking.ml" if (!window.abortion) { // only run inject script once on the page, even if game.min is requested multiple times window.abortion = "Hey, we've injected the thingy"; // check for outdated plugin const pluginVersion = chrome.runtime.getManifest().version; const supportedVersion = (await (await fetch(`${redirectorDomain}/version`)).text()); if (pluginVersion !== supportedVersion) { const res = confirm("Outdated plugin version! Hacks are not guaranteed to work! If you would like to update, please click 'OK'"); if (res) location = "https://github.com/Prodigy-Hacking/ProdigyMathGameHacking/wiki/How-to-Update"; } // die, integrity [...document.getElementsByTagName("script"), ...document.getElementsByTagName("link")].forEach(v => { if (v.integrity) { console.log(v.integrity); v.removeAttribute("integrity"); } }); // <link rel="preload" href="https://code.prodigygame.com/code/3-13-0/game.min.js?v=3-13-0" as="script" crossorigin="anonymous"></link> /* const prelly = document.createElement("link"); prelly.rel = "preload"; prelly.href = `${redirectorDomain}/game.min.js`; */ // <script src="https://code.prodigygame.com/code/3-13-0/game.min.js?v=3-13-0" onload="SW.Load.onGameLoad();" crossorigin="anonymous"></script> // we cancel the real game.min, and just append ours // a messy solution for sure, but this should only be a bandaid on a bulletwound const penguin = document.createElement("script"); penguin.src = `${redirectorDomain}/game.min.js`; document.body.append(penguin); } })(); ***Replication***: Startup prodigy, login, the loading with be stopped mid way through. ***Images***: ![image](https://user-images.githubusercontent.com/60020494/109702568-6798a300-7b62-11eb-9fa0-baba10dc74f4.png) ***Additional Errors***: That is all.
non_process
error s on startup tested multiple times repeatable description uncaught in promise typeerror failed to fetch async const debug false const redirectordomain debug if window abortion only run inject script once on the page even if game min is requested multiple times window abortion hey we ve injected the thingy check for outdated plugin const pluginversion chrome runtime getmanifest version const supportedversion await await fetch redirectordomain version text if pluginversion supportedversion const res confirm outdated plugin version hacks are not guaranteed to work if you would like to update please click ok if res location die integrity foreach v if v integrity console log v integrity v removeattribute integrity const prelly document createelement link prelly rel preload prelly href redirectordomain game min js we cancel the real game min and just append ours a messy solution for sure but this should only be a bandaid on a bulletwound const penguin document createelement script penguin src redirectordomain game min js document body append penguin replication startup prodigy login the loading with be stopped mid way through images additional errors that is all
0
136,482
19,824,485,521
IssuesEvent
2022-01-20 03:47:04
microsoft/pyright
https://api.github.com/repos/microsoft/pyright
closed
nonlocal type assignment error not caught
as designed
**Describe the bug** ```py from typing import Callable def patch_method() -> Callable[[], None]: captured = 5 def inner() -> None: nonlocal captured reveal_type(captured) # T: int captured = 'a' # should error reveal_type(captured) # T: Literal['a'] return inner ``` **Expected behavior** A type error should be raised on the incorrect assignment of the `captured` variable within the `inner` function. **VS Code extension or command-line** Command-line v1.1.212 **Additional context** Should be noted that mypy raises an error ``` main.py:8: error: Incompatible types in assignment (expression has type "str", variable has type "int") [assignment] captured = 'a' ```
1.0
nonlocal type assignment error not caught - **Describe the bug** ```py from typing import Callable def patch_method() -> Callable[[], None]: captured = 5 def inner() -> None: nonlocal captured reveal_type(captured) # T: int captured = 'a' # should error reveal_type(captured) # T: Literal['a'] return inner ``` **Expected behavior** A type error should be raised on the incorrect assignment of the `captured` variable within the `inner` function. **VS Code extension or command-line** Command-line v1.1.212 **Additional context** Should be noted that mypy raises an error ``` main.py:8: error: Incompatible types in assignment (expression has type "str", variable has type "int") [assignment] captured = 'a' ```
non_process
nonlocal type assignment error not caught describe the bug py from typing import callable def patch method callable none captured def inner none nonlocal captured reveal type captured t int captured a should error reveal type captured t literal return inner expected behavior a type error should be raised on the incorrect assignment of the captured variable within the inner function vs code extension or command line command line additional context should be noted that mypy raises an error main py error incompatible types in assignment expression has type str variable has type int captured a
0
335,412
30,028,854,221
IssuesEvent
2023-06-27 08:15:53
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix jax_devicearray.test_jax_special_add
JAX Frontend Sub Task Failing Test
| | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="null"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix jax_devicearray.test_jax_special_add - | | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5386564860/jobs/9776762610"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="null"><img src=https://img.shields.io/badge/-success-success></a>
non_process
fix jax devicearray test jax special add jax a href src numpy a href src tensorflow a href src torch a href src paddle img src
0
443,563
12,796,086,271
IssuesEvent
2020-07-02 09:49:51
mozilla/addons-server
https://api.github.com/repos/mozilla/addons-server
closed
hard delete addons from *all* users deleted over 7 years old
priority: p4
#14494 adds a cron job that deletes the addons of user accounts that were deleted over 7 years ago, but only for user accounts that are being cleared of fxa_id, email, or last_login_ip. This is fine for new accounts, but we've only started retaining fxa_id and email a few weeks ago, and last_login_ip was previously cleared for all users after 6 months. We should hard delete the add-ons from _all_ user accounts deleted over 7 years ago.
1.0
hard delete addons from *all* users deleted over 7 years old - #14494 adds a cron job that deletes the addons of user accounts that were deleted over 7 years ago, but only for user accounts that are being cleared of fxa_id, email, or last_login_ip. This is fine for new accounts, but we've only started retaining fxa_id and email a few weeks ago, and last_login_ip was previously cleared for all users after 6 months. We should hard delete the add-ons from _all_ user accounts deleted over 7 years ago.
non_process
hard delete addons from all users deleted over years old adds a cron job that deletes the addons of user accounts that were deleted over years ago but only for user accounts that are being cleared of fxa id email or last login ip this is fine for new accounts but we ve only started retaining fxa id and email a few weeks ago and last login ip was previously cleared for all users after months we should hard delete the add ons from all user accounts deleted over years ago
0
10,272
13,125,343,965
IssuesEvent
2020-08-06 06:27:51
didi/mpx
https://api.github.com/repos/didi/mpx
closed
[Bug report] 支付宝下 vant field 组件clearable无法清除输入框中内容
processing
**问题描述** 支付宝下vant field 组件clearable无法清除输入框中内容 使用wx:model双向绑定后绑定值情况了但是输入框中的内容还存在 ![image](https://user-images.githubusercontent.com/22525904/89404884-4814d580-d74d-11ea-8d8f-c669011cb60a.png) **环境信息描述** 至少包含以下部分: 1. Mac 2. mpx版本 "@mpxjs/core": "^2.5.33" "@mpxjs/webpack-plugin": "^2.5.33" 3. 支付宝IDE 1.13.4 **最简复现demo** [归档.zip](https://github.com/didi/mpx/files/5027842/default.zip)
1.0
[Bug report] 支付宝下 vant field 组件clearable无法清除输入框中内容 - **问题描述** 支付宝下vant field 组件clearable无法清除输入框中内容 使用wx:model双向绑定后绑定值情况了但是输入框中的内容还存在 ![image](https://user-images.githubusercontent.com/22525904/89404884-4814d580-d74d-11ea-8d8f-c669011cb60a.png) **环境信息描述** 至少包含以下部分: 1. Mac 2. mpx版本 "@mpxjs/core": "^2.5.33" "@mpxjs/webpack-plugin": "^2.5.33" 3. 支付宝IDE 1.13.4 **最简复现demo** [归档.zip](https://github.com/didi/mpx/files/5027842/default.zip)
process
支付宝下 vant field 组件clearable无法清除输入框中内容 问题描述 支付宝下vant field 组件clearable无法清除输入框中内容 使用wx model双向绑定后绑定值情况了但是输入框中的内容还存在 环境信息描述 至少包含以下部分: mac mpx版本 mpxjs core mpxjs webpack plugin 支付宝ide 最简复现demo
1
131,052
10,679,237,937
IssuesEvent
2019-10-21 18:52:08
HERA-Team/hera-validation
https://api.github.com/repos/HERA-Team/hera-validation
opened
Step 0.5: Sharp-Feature P(k)
formal-test
<!-- Give a brief description here, if necessary, of what this proposed test should do --> This is a test of `pspec`s ability to recover a Gaussian random sky with known P(k), where that P(k) has a reasonably "sharp" feature within the range of interest. <!-- Fill out the following meta-data for the issue (it will be used to apply relevant tags) --> * Simulation Component: EoR (with "sharp" feature) * Simulators: `RIMEz` * Pipeline Components: `pspec` * Depends on: Formally, depends on #35 , though it can be done before that is addressed. ## Why this test is required <!-- Note here why this particular test is important, over and above other tests --> In principle, passing 0.1 and 0.2 does not mean that `pspec` is able to recover sharp features well, and there may be features in the power spectrum. ## Summary A brief step-by-step description of the proposed test follows: * Simulate a Gaussian sky with known input P(k) * Construct visibilities with `RIMEz` * Generate P(k) with `pspec`. ## Simulation Details <!-- What kinds of details might be required for the simulations to run this test? A few suggestions are below, feel free to add more/remove some. --> * Freq. range: * Channel width: * Baseline/antenna configuration: * Total integration time: * Number of realisations: ## Criteria for Success <!-- List explicitly what criteria should be required for success of the test. Try to be as precise as possible --> * P(k) matches known input to within 1%
1.0
Step 0.5: Sharp-Feature P(k) - <!-- Give a brief description here, if necessary, of what this proposed test should do --> This is a test of `pspec`s ability to recover a Gaussian random sky with known P(k), where that P(k) has a reasonably "sharp" feature within the range of interest. <!-- Fill out the following meta-data for the issue (it will be used to apply relevant tags) --> * Simulation Component: EoR (with "sharp" feature) * Simulators: `RIMEz` * Pipeline Components: `pspec` * Depends on: Formally, depends on #35 , though it can be done before that is addressed. ## Why this test is required <!-- Note here why this particular test is important, over and above other tests --> In principle, passing 0.1 and 0.2 does not mean that `pspec` is able to recover sharp features well, and there may be features in the power spectrum. ## Summary A brief step-by-step description of the proposed test follows: * Simulate a Gaussian sky with known input P(k) * Construct visibilities with `RIMEz` * Generate P(k) with `pspec`. ## Simulation Details <!-- What kinds of details might be required for the simulations to run this test? A few suggestions are below, feel free to add more/remove some. --> * Freq. range: * Channel width: * Baseline/antenna configuration: * Total integration time: * Number of realisations: ## Criteria for Success <!-- List explicitly what criteria should be required for success of the test. Try to be as precise as possible --> * P(k) matches known input to within 1%
non_process
step sharp feature p k this is a test of pspec s ability to recover a gaussian random sky with known p k where that p k has a reasonably sharp feature within the range of interest simulation component eor with sharp feature simulators rimez pipeline components pspec depends on formally depends on though it can be done before that is addressed why this test is required in principle passing and does not mean that pspec is able to recover sharp features well and there may be features in the power spectrum summary a brief step by step description of the proposed test follows simulate a gaussian sky with known input p k construct visibilities with rimez generate p k with pspec simulation details what kinds of details might be required for the simulations to run this test a few suggestions are below feel free to add more remove some freq range channel width baseline antenna configuration total integration time number of realisations criteria for success list explicitly what criteria should be required for success of the test try to be as precise as possible p k matches known input to within
0
12,712
15,084,826,066
IssuesEvent
2021-02-05 17:43:41
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
arm v7 b.w instructions now decoding as vst4.32 (with ghidra 9.2)
Feature: Processor/ARM
**Describe the bug** byte sequence c2 f4 b4 bf used to decode (with ghidra 9.1.1) as: b.w [address] now decodes (with ghidra 9.2) as: vst4.32 {d27[],d29[],d31[],d1[]},[r2@64],r4 **To Reproduce** Upgrade to ghidra 9.2 and re-disassemble. **Environment (please complete the following information):** - OS: linux - Java Version: ? Don't recall.. - Ghidra Version: 9.2 **Additional context** This file is language ARM:LE:32:v7 (1.103) (after running ghidra 9.2, was 1.102 under ghidra 9.1.1) Other b.w instructions get disassembled to vst4.8, and potentially other wrong instructions. These instructions get tagged with "Warning [Unimplemented Pcode]: Instruction pcode is unimplemented: vst4.32"
1.0
arm v7 b.w instructions now decoding as vst4.32 (with ghidra 9.2) - **Describe the bug** byte sequence c2 f4 b4 bf used to decode (with ghidra 9.1.1) as: b.w [address] now decodes (with ghidra 9.2) as: vst4.32 {d27[],d29[],d31[],d1[]},[r2@64],r4 **To Reproduce** Upgrade to ghidra 9.2 and re-disassemble. **Environment (please complete the following information):** - OS: linux - Java Version: ? Don't recall.. - Ghidra Version: 9.2 **Additional context** This file is language ARM:LE:32:v7 (1.103) (after running ghidra 9.2, was 1.102 under ghidra 9.1.1) Other b.w instructions get disassembled to vst4.8, and potentially other wrong instructions. These instructions get tagged with "Warning [Unimplemented Pcode]: Instruction pcode is unimplemented: vst4.32"
process
arm b w instructions now decoding as with ghidra describe the bug byte sequence bf used to decode with ghidra as b w now decodes with ghidra as to reproduce upgrade to ghidra and re disassemble environment please complete the following information os linux java version don t recall ghidra version additional context this file is language arm le after running ghidra was under ghidra other b w instructions get disassembled to and potentially other wrong instructions these instructions get tagged with warning instruction pcode is unimplemented
1
21,206
28,242,944,744
IssuesEvent
2023-04-06 08:37:27
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
GO:0044658 pore formation in membrane of host by symbiont missing parent
multi-species process
GO:0044658 pore formation in membrane of host by symbiont - move to disruption by symbiont of host cellular component - change label to "pore formation in host plasma membrane"
1.0
GO:0044658 pore formation in membrane of host by symbiont missing parent - GO:0044658 pore formation in membrane of host by symbiont - move to disruption by symbiont of host cellular component - change label to "pore formation in host plasma membrane"
process
go pore formation in membrane of host by symbiont missing parent go pore formation in membrane of host by symbiont move to disruption by symbiont of host cellular component change label to pore formation in host plasma membrane
1
17,608
23,428,322,413
IssuesEvent
2022-08-14 18:33:28
ankidroid/Anki-Android
https://api.github.com/repos/ankidroid/Anki-Android
closed
'DeckPickerTest > version16CollectionOpens' is flaky on windows
Priority-Medium Needs Triage Stale Dev Test process
'DeckPickerTest > version16CollectionOpens' is flaky on windows Seen in another instance or two as well? Will re-run failed unit test once it's possible (other parts of OS matrix have to finish first) ``` com.ichi2.anki.DeckPickerTest > checkDisplayOfStudyOptionsOnTablet[0] SKIPPED thread '<unnamed>' panicked at 'Anki already open, or media currently syncing.', src/lib.rs:251:25 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread '<unnamed>' panicked at 'DBError { info: "SqliteFailure(Error { code: Unknown, extended_code: 1 }, Some(\"no such table: decks\"))", kind: Other }', src/lib.rs:256:13 thread '<unnamed>' panicked at 'Anki already open, or media currently syncing.', src/lib.rs:251:25 com.ichi2.anki.DeckPickerTest > version16CollectionOpens[1] FAILED java.lang.AssertionError: Collection should now be open at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26) at com.ichi2.anki.DeckPickerTest.version16CollectionOpens(DeckPickerTest.java:411) ``` _Originally posted by @mikehardy in https://github.com/ankidroid/Anki-Android/issues/10811#issuecomment-1098285999_
1.0
'DeckPickerTest > version16CollectionOpens' is flaky on windows - 'DeckPickerTest > version16CollectionOpens' is flaky on windows Seen in another instance or two as well? Will re-run failed unit test once it's possible (other parts of OS matrix have to finish first) ``` com.ichi2.anki.DeckPickerTest > checkDisplayOfStudyOptionsOnTablet[0] SKIPPED thread '<unnamed>' panicked at 'Anki already open, or media currently syncing.', src/lib.rs:251:25 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread '<unnamed>' panicked at 'DBError { info: "SqliteFailure(Error { code: Unknown, extended_code: 1 }, Some(\"no such table: decks\"))", kind: Other }', src/lib.rs:256:13 thread '<unnamed>' panicked at 'Anki already open, or media currently syncing.', src/lib.rs:251:25 com.ichi2.anki.DeckPickerTest > version16CollectionOpens[1] FAILED java.lang.AssertionError: Collection should now be open at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26) at com.ichi2.anki.DeckPickerTest.version16CollectionOpens(DeckPickerTest.java:411) ``` _Originally posted by @mikehardy in https://github.com/ankidroid/Anki-Android/issues/10811#issuecomment-1098285999_
process
deckpickertest is flaky on windows deckpickertest is flaky on windows seen in another instance or two as well will re run failed unit test once it s possible other parts of os matrix have to finish first com anki deckpickertest checkdisplayofstudyoptionsontablet skipped thread panicked at anki already open or media currently syncing src lib rs note run with rust backtrace environment variable to display a backtrace thread panicked at dberror info sqlitefailure error code unknown extended code some no such table decks kind other src lib rs thread panicked at anki already open or media currently syncing src lib rs com anki deckpickertest failed java lang assertionerror collection should now be open at org hamcrest matcherassert assertthat matcherassert java at com anki deckpickertest deckpickertest java originally posted by mikehardy in
1
284
2,722,778,372
IssuesEvent
2015-04-14 07:47:51
mkdocs/mkdocs
https://api.github.com/repos/mkdocs/mkdocs
closed
Release 0.12
Process
- [x] Update website DNS. #405 - [x] Fix blocker #441 - [x] Verify potential blocker #439 - [x] Finalise and merge release notes #437 - [x] Bump version - [x] Publish to PyPI
1.0
Release 0.12 - - [x] Update website DNS. #405 - [x] Fix blocker #441 - [x] Verify potential blocker #439 - [x] Finalise and merge release notes #437 - [x] Bump version - [x] Publish to PyPI
process
release update website dns fix blocker verify potential blocker finalise and merge release notes bump version publish to pypi
1
539,089
15,783,144,001
IssuesEvent
2021-04-01 13:39:32
enso-org/enso
https://api.github.com/repos/enso-org/enso
closed
Database Grouped Columns Report Wrong Counts
Category: Libraries Change: Non-Breaking Difficulty: Intermediate Priority: High Type: Bug
<!-- Please ensure that you are running the latest version of Enso before reporting the bug! It may have been fixed since. --> ### General Summary <!-- - Please include a high-level description of your bug here. --> The generated SQL code results in wrong counting when grouping is involved. For example: ``` c = table.group by="A" . at "B" . mean c.length ``` `c` has as many entries as distinct values of A, but `c.length` will be equal to the size of the first group. That is because the generated code is `SELECT COUNT(*) FROM ... GROUP BY A` - that fetches a different thing than we'd expect. The correct codegen will be `SELECT COUNT(*) FROM (SELECT 1 FROM ... GROUP BY A)`, i.e. we need to lift the query into a subquery and count rows of that.
1.0
Database Grouped Columns Report Wrong Counts - <!-- Please ensure that you are running the latest version of Enso before reporting the bug! It may have been fixed since. --> ### General Summary <!-- - Please include a high-level description of your bug here. --> The generated SQL code results in wrong counting when grouping is involved. For example: ``` c = table.group by="A" . at "B" . mean c.length ``` `c` has as many entries as distinct values of A, but `c.length` will be equal to the size of the first group. That is because the generated code is `SELECT COUNT(*) FROM ... GROUP BY A` - that fetches a different thing than we'd expect. The correct codegen will be `SELECT COUNT(*) FROM (SELECT 1 FROM ... GROUP BY A)`, i.e. we need to lift the query into a subquery and count rows of that.
non_process
database grouped columns report wrong counts please ensure that you are running the latest version of enso before reporting the bug it may have been fixed since general summary please include a high level description of your bug here the generated sql code results in wrong counting when grouping is involved for example c table group by a at b mean c length c has as many entries as distinct values of a but c length will be equal to the size of the first group that is because the generated code is select count from group by a that fetches a different thing than we d expect the correct codegen will be select count from select from group by a i e we need to lift the query into a subquery and count rows of that
0
16,390
21,158,103,422
IssuesEvent
2022-04-07 06:43:32
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
opened
DISABLED test_exception_single (__main__.ForkTest)
module: multiprocessing module: flaky-tests skipped
Platforms: asan, linux This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_single%2C%20ForkTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/5863105042). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
1.0
DISABLED test_exception_single (__main__.ForkTest) - Platforms: asan, linux This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_single%2C%20ForkTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/5863105042). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
process
disabled test exception single main forktest platforms asan linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green
1
29,001
8,248,310,343
IssuesEvent
2018-09-11 18:03:32
idris-lang/Idris-dev
https://api.github.com/repos/idris-lang/Idris-dev
closed
pow broken with C backend?
C-Low Hanging Fruit G-C Backend S-Normal U-Build System W-Debugging
When I compile the following code ``` idris module Main main : IO () main = do print $ (pow 2 200) ``` with `idris test.idr -o test` then all I get is ``` shell # ./test 0 # ```
1.0
pow broken with C backend? - When I compile the following code ``` idris module Main main : IO () main = do print $ (pow 2 200) ``` with `idris test.idr -o test` then all I get is ``` shell # ./test 0 # ```
non_process
pow broken with c backend when i compile the following code idris module main main io main do print pow with idris test idr o test then all i get is shell test
0
766,254
26,875,249,972
IssuesEvent
2023-02-05 00:05:15
apache/hudi
https://api.github.com/repos/apache/hudi
closed
[SUPPORT][CDC]UnresolvedUnionException: Not in union ["null","double"]: 20230202105806923_0_1
schema-and-data-types priority:blocker spark change-data-capture
**_Tips before filing an issue_** - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)? - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org. - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly. **Describe the problem you faced** enable CDC, cannot perform compaction table service. **To Reproduce** Steps to reproduce the behavior: 1.hoodie.table.cdc.enabled=true hoodie.table.cdc.supplemental.logging.mode=data_before_after 2.table type: mor **Expected behavior** A clear and concise description of what you expected to happen. **Environment Description** * Hudi version : master * Spark version : 3.1.1 * Hive version : none * Hadoop version :none * Storage (HDFS/S3/GCS..) : * Running on Docker? (yes/no) : **Additional context** Add any other context about the problem here. **Stacktrace** ``` 23/02/02 10:58:21 ERROR HoodieStreamingSink: Micro batch id=1 threw following expections,aborting streaming app to avoid data loss: org.apache.hudi.exception.HoodieCompactionException: Could not compact /tmp/hudi/cdc_test at org.apache.hudi.table.action.compact.RunCompactionActionExecutor.execute(RunCompactionActionExecutor.java:116) at org.apache.hudi.table.HoodieSparkMergeOnReadTable.compact(HoodieSparkMergeOnReadTable.java:140) at org.apache.hudi.client.SparkRDDTableServiceClient.compact(SparkRDDTableServiceClient.java:75) at org.apache.hudi.client.BaseHoodieTableServiceClient.lambda$runAnyPendingCompactions$2(BaseHoodieTableServiceClient.java:191) at java.util.ArrayList.forEach(ArrayList.java:1259) at org.apache.hudi.client.BaseHoodieTableServiceClient.runAnyPendingCompactions(BaseHoodieTableServiceClient.java:189) at org.apache.hudi.client.BaseHoodieTableServiceClient.inlineCompaction(BaseHoodieTableServiceClient.java:160) at org.apache.hudi.client.BaseHoodieTableServiceClient.runTableServicesInline(BaseHoodieTableServiceClient.java:334) 
at org.apache.hudi.client.BaseHoodieWriteClient.runTableServicesInline(BaseHoodieWriteClient.java:540) at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:249) at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:102) at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:903) at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:372) at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:122) at scala.util.Try$.apply(Try.scala:213) at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$1(HoodieStreamingSink.scala:120) at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:244) at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:119) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:586) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:584) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355) at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:584) at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355) at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194) at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188) at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333) at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244) Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 70.0 failed 1 times, most recent failure: Lost task 0.0 in stage 70.0 (TID 67) (LAPTOP-DONGSJ executor driver): org.apache.avro.UnresolvedUnionException: Not in union ["null","double"]: 20230202105806923_0_1 at org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:740) at org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:205) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:123) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:166) at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:156) at 
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:118) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:125) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:166) at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:156) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:118) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62) at org.apache.hudi.avro.HoodieAvroUtils.indexedRecordToBytes(HoodieAvroUtils.java:136) at org.apache.hudi.avro.HoodieAvroUtils.avroToBytes(HoodieAvroUtils.java:128) at org.apache.hudi.common.model.HoodieAvroPayload.<init>(HoodieAvroPayload.java:47) at org.apache.hudi.io.HoodieCDCLogger.put(HoodieCDCLogger.java:175) at org.apache.hudi.io.HoodieMergeHandleWithChangeLog.writeInsertRecord(HoodieMergeHandleWithChangeLog.java:106) at org.apache.hudi.io.HoodieMergeHandle.writeIncomingRecords(HoodieMergeHandle.java:397) at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:405) at org.apache.hudi.io.HoodieMergeHandleWithChangeLog.close(HoodieMergeHandleWithChangeLog.java:112) at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:168) at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.handleUpdateInternal(HoodieSparkCopyOnWriteTable.java:224) at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.handleUpdate(HoodieSparkCopyOnWriteTable.java:215) at org.apache.hudi.table.action.compact.CompactionExecutionHelper.writeFileAndGetWriteStats(CompactionExecutionHelper.java:64) at 
org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:239) at org.apache.hudi.table.action.compact.HoodieCompactor.lambda$compact$9cd4b1be$1(HoodieCompactor.java:137) at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070) at scala.collection.Iterator$$anon$10.next(Iterator.scala:461) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:221) at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:349) at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1440) at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1350) at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1414) at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1237) at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:384) at org.apache.spark.rdd.RDD.iterator(RDD.scala:335) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2253) at 
org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2202) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2201) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2201) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1078) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1078) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1078) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2440) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2382) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2371) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2202) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2223) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2242) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267) at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:414) at org.apache.spark.rdd.RDD.collect(RDD.scala:1029) at org.apache.spark.api.java.JavaRDDLike.collect(JavaRDDLike.scala:362) at org.apache.spark.api.java.JavaRDDLike.collect$(JavaRDDLike.scala:361) at 
org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45) at org.apache.hudi.data.HoodieJavaRDD.collectAsList(HoodieJavaRDD.java:163) at org.apache.hudi.table.action.compact.RunCompactionActionExecutor.execute(RunCompactionActionExecutor.java:101) ... 38 more ```
1.0
[SUPPORT][CDC]UnresolvedUnionException: Not in union ["null","double"]: 20230202105806923_0_1 - **_Tips before filing an issue_** - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)? - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org. - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly. **Describe the problem you faced** enable CDC, cannot perform compaction table service. **To Reproduce** Steps to reproduce the behavior: 1.hoodie.table.cdc.enabled=true hoodie.table.cdc.supplemental.logging.mode=data_before_after 2.table type: mor **Expected behavior** A clear and concise description of what you expected to happen. **Environment Description** * Hudi version : master * Spark version : 3.1.1 * Hive version : none * Hadoop version :none * Storage (HDFS/S3/GCS..) : * Running on Docker? (yes/no) : **Additional context** Add any other context about the problem here. 
**Stacktrace** ``` 23/02/02 10:58:21 ERROR HoodieStreamingSink: Micro batch id=1 threw following expections,aborting streaming app to avoid data loss: org.apache.hudi.exception.HoodieCompactionException: Could not compact /tmp/hudi/cdc_test at org.apache.hudi.table.action.compact.RunCompactionActionExecutor.execute(RunCompactionActionExecutor.java:116) at org.apache.hudi.table.HoodieSparkMergeOnReadTable.compact(HoodieSparkMergeOnReadTable.java:140) at org.apache.hudi.client.SparkRDDTableServiceClient.compact(SparkRDDTableServiceClient.java:75) at org.apache.hudi.client.BaseHoodieTableServiceClient.lambda$runAnyPendingCompactions$2(BaseHoodieTableServiceClient.java:191) at java.util.ArrayList.forEach(ArrayList.java:1259) at org.apache.hudi.client.BaseHoodieTableServiceClient.runAnyPendingCompactions(BaseHoodieTableServiceClient.java:189) at org.apache.hudi.client.BaseHoodieTableServiceClient.inlineCompaction(BaseHoodieTableServiceClient.java:160) at org.apache.hudi.client.BaseHoodieTableServiceClient.runTableServicesInline(BaseHoodieTableServiceClient.java:334) at org.apache.hudi.client.BaseHoodieWriteClient.runTableServicesInline(BaseHoodieWriteClient.java:540) at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:249) at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:102) at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:903) at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:372) at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:122) at scala.util.Try$.apply(Try.scala:213) at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$1(HoodieStreamingSink.scala:120) at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:244) at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:119) at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:586) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:584) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355) at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:584) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:226) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357) at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355) at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:194) at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:188) at 
org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:333) at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244) Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 70.0 failed 1 times, most recent failure: Lost task 0.0 in stage 70.0 (TID 67) (LAPTOP-DONGSJ executor driver): org.apache.avro.UnresolvedUnionException: Not in union ["null","double"]: 20230202105806923_0_1 at org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:740) at org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:205) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:123) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:166) at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:156) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:118) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:125) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:166) at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:156) at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:118) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75) at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62) at org.apache.hudi.avro.HoodieAvroUtils.indexedRecordToBytes(HoodieAvroUtils.java:136) at 
org.apache.hudi.avro.HoodieAvroUtils.avroToBytes(HoodieAvroUtils.java:128) at org.apache.hudi.common.model.HoodieAvroPayload.<init>(HoodieAvroPayload.java:47) at org.apache.hudi.io.HoodieCDCLogger.put(HoodieCDCLogger.java:175) at org.apache.hudi.io.HoodieMergeHandleWithChangeLog.writeInsertRecord(HoodieMergeHandleWithChangeLog.java:106) at org.apache.hudi.io.HoodieMergeHandle.writeIncomingRecords(HoodieMergeHandle.java:397) at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:405) at org.apache.hudi.io.HoodieMergeHandleWithChangeLog.close(HoodieMergeHandleWithChangeLog.java:112) at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:168) at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.handleUpdateInternal(HoodieSparkCopyOnWriteTable.java:224) at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.handleUpdate(HoodieSparkCopyOnWriteTable.java:215) at org.apache.hudi.table.action.compact.CompactionExecutionHelper.writeFileAndGetWriteStats(CompactionExecutionHelper.java:64) at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:239) at org.apache.hudi.table.action.compact.HoodieCompactor.lambda$compact$9cd4b1be$1(HoodieCompactor.java:137) at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070) at scala.collection.Iterator$$anon$10.next(Iterator.scala:461) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:221) at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:349) at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1440) at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1350) at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1414) at 
org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1237) at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:384) at org.apache.spark.rdd.RDD.iterator(RDD.scala:335) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2253) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2202) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2201) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2201) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1078) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1078) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1078) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2440) at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2382) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2371) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2202) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2223) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2242) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267) at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:414) at org.apache.spark.rdd.RDD.collect(RDD.scala:1029) at org.apache.spark.api.java.JavaRDDLike.collect(JavaRDDLike.scala:362) at org.apache.spark.api.java.JavaRDDLike.collect$(JavaRDDLike.scala:361) at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45) at org.apache.hudi.data.HoodieJavaRDD.collectAsList(HoodieJavaRDD.java:163) at org.apache.hudi.table.action.compact.RunCompactionActionExecutor.execute(RunCompactionActionExecutor.java:101) ... 38 more ```
non_process
unresolvedunionexception not in union tips before filing an issue have you gone through our join the mailing list to engage in conversations and get faster support at dev subscribe hudi apache org if you have triaged this as a bug then file an directly describe the problem you faced enable cdc cannot perform compaction table service to reproduce steps to reproduce the behavior hoodie table cdc enabled true hoodie table cdc supplemental logging mode data before after table type mor expected behavior a clear and concise description of what you expected to happen environment description hudi version master spark version hive version none hadoop version none storage hdfs gcs running on docker yes no additional context add any other context about the problem here stacktrace error hoodiestreamingsink micro batch id threw following expections aborting streaming app to avoid data loss org apache hudi exception hoodiecompactionexception could not compact tmp hudi cdc test at org apache hudi table action compact runcompactionactionexecutor execute runcompactionactionexecutor java at org apache hudi table hoodiesparkmergeonreadtable compact hoodiesparkmergeonreadtable java at org apache hudi client sparkrddtableserviceclient compact sparkrddtableserviceclient java at org apache hudi client basehoodietableserviceclient lambda runanypendingcompactions basehoodietableserviceclient java at java util arraylist foreach arraylist java at org apache hudi client basehoodietableserviceclient runanypendingcompactions basehoodietableserviceclient java at org apache hudi client basehoodietableserviceclient inlinecompaction basehoodietableserviceclient java at org apache hudi client basehoodietableserviceclient runtableservicesinline basehoodietableserviceclient java at org apache hudi client basehoodiewriteclient runtableservicesinline basehoodiewriteclient java at org apache hudi client basehoodiewriteclient commitstats basehoodiewriteclient java at org apache hudi client 
sparkrddwriteclient commit sparkrddwriteclient java at org apache hudi hoodiesparksqlwriter commitandperformpostoperations hoodiesparksqlwriter scala at org apache hudi hoodiesparksqlwriter write hoodiesparksqlwriter scala at org apache hudi hoodiestreamingsink anonfun addbatch hoodiestreamingsink scala at scala util try apply try scala at org apache hudi hoodiestreamingsink anonfun addbatch hoodiestreamingsink scala at org apache hudi hoodiestreamingsink retry hoodiestreamingsink scala at org apache hudi hoodiestreamingsink addbatch hoodiestreamingsink scala at org apache spark sql execution streaming microbatchexecution anonfun runbatch microbatchexecution scala at org apache spark sql execution sqlexecution anonfun withnewexecutionid sqlexecution scala at org apache spark sql execution sqlexecution withsqlconfpropagated sqlexecution scala at org apache spark sql execution sqlexecution anonfun withnewexecutionid sqlexecution scala at org apache spark sql sparksession withactive sparksession scala at org apache spark sql execution sqlexecution withnewexecutionid sqlexecution scala at org apache spark sql execution streaming microbatchexecution anonfun runbatch microbatchexecution scala at org apache spark sql execution streaming progressreporter reporttimetaken progressreporter scala at org apache spark sql execution streaming progressreporter reporttimetaken progressreporter scala at org apache spark sql execution streaming streamexecution reporttimetaken streamexecution scala at org apache spark sql execution streaming microbatchexecution runbatch microbatchexecution scala at org apache spark sql execution streaming microbatchexecution anonfun runactivatedstream microbatchexecution scala at scala runtime mcv sp apply mcv sp java at org apache spark sql execution streaming progressreporter reporttimetaken progressreporter scala at org apache spark sql execution streaming progressreporter reporttimetaken progressreporter scala at org apache spark sql execution 
streaming streamexecution reporttimetaken streamexecution scala at org apache spark sql execution streaming microbatchexecution anonfun runactivatedstream microbatchexecution scala at org apache spark sql execution streaming processingtimeexecutor execute triggerexecutor scala at org apache spark sql execution streaming microbatchexecution runactivatedstream microbatchexecution scala at org apache spark sql execution streaming streamexecution org apache spark sql execution streaming streamexecution runstream streamexecution scala at org apache spark sql execution streaming streamexecution anon run streamexecution scala caused by org apache spark sparkexception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid laptop dongsj executor driver org apache avro unresolvedunionexception not in union at org apache avro generic genericdata resolveunion genericdata java at org apache avro generic genericdatumwriter resolveunion genericdatumwriter java at org apache avro generic genericdatumwriter writewithoutconversion genericdatumwriter java at org apache avro generic genericdatumwriter write genericdatumwriter java at org apache avro generic genericdatumwriter writefield genericdatumwriter java at org apache avro generic genericdatumwriter writerecord genericdatumwriter java at org apache avro generic genericdatumwriter writewithoutconversion genericdatumwriter java at org apache avro generic genericdatumwriter write genericdatumwriter java at org apache avro generic genericdatumwriter writewithoutconversion genericdatumwriter java at org apache avro generic genericdatumwriter write genericdatumwriter java at org apache avro generic genericdatumwriter writefield genericdatumwriter java at org apache avro generic genericdatumwriter writerecord genericdatumwriter java at org apache avro generic genericdatumwriter writewithoutconversion genericdatumwriter java at org apache avro generic genericdatumwriter write 
genericdatumwriter java at org apache avro generic genericdatumwriter write genericdatumwriter java at org apache hudi avro hoodieavroutils indexedrecordtobytes hoodieavroutils java at org apache hudi avro hoodieavroutils avrotobytes hoodieavroutils java at org apache hudi common model hoodieavropayload hoodieavropayload java at org apache hudi io hoodiecdclogger put hoodiecdclogger java at org apache hudi io hoodiemergehandlewithchangelog writeinsertrecord hoodiemergehandlewithchangelog java at org apache hudi io hoodiemergehandle writeincomingrecords hoodiemergehandle java at org apache hudi io hoodiemergehandle close hoodiemergehandle java at org apache hudi io hoodiemergehandlewithchangelog close hoodiemergehandlewithchangelog java at org apache hudi table action commit hoodiemergehelper runmerge hoodiemergehelper java at org apache hudi table hoodiesparkcopyonwritetable handleupdateinternal hoodiesparkcopyonwritetable java at org apache hudi table hoodiesparkcopyonwritetable handleupdate hoodiesparkcopyonwritetable java at org apache hudi table action compact compactionexecutionhelper writefileandgetwritestats compactionexecutionhelper java at org apache hudi table action compact hoodiecompactor compact hoodiecompactor java at org apache hudi table action compact hoodiecompactor lambda compact hoodiecompactor java at org apache spark api java javapairrdd anonfun toscalafunction javapairrdd scala at scala collection iterator anon next iterator scala at scala collection iterator anon nextcur iterator scala at scala collection iterator anon hasnext iterator scala at org apache spark storage memory memorystore putiterator memorystore scala at org apache spark storage memory memorystore putiteratorasbytes memorystore scala at org apache spark storage blockmanager anonfun doputiterator blockmanager scala at org apache spark storage blockmanager org apache spark storage blockmanager doput blockmanager scala at org apache spark storage blockmanager doputiterator 
blockmanager scala at org apache spark storage blockmanager getorelseupdate blockmanager scala at org apache spark rdd rdd getorcompute rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark rdd mappartitionsrdd compute mappartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java driver stacktrace at org apache spark scheduler dagscheduler failjobandindependentstages dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage adapted dagscheduler scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable arraybuffer foreach arraybuffer scala at org apache spark scheduler dagscheduler abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed adapted dagscheduler scala at scala option foreach option scala at org apache spark scheduler dagscheduler handletasksetfailed dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop doonreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop onreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop 
onreceive dagscheduler scala at org apache spark util eventloop anon run eventloop scala at org apache spark scheduler dagscheduler runjob dagscheduler scala at org apache spark sparkcontext runjob sparkcontext scala at org apache spark sparkcontext runjob sparkcontext scala at org apache spark sparkcontext runjob sparkcontext scala at org apache spark sparkcontext runjob sparkcontext scala at org apache spark rdd rdd anonfun collect rdd scala at org apache spark rdd rddoperationscope withscope rddoperationscope scala at org apache spark rdd rddoperationscope withscope rddoperationscope scala at org apache spark rdd rdd withscope rdd scala at org apache spark rdd rdd collect rdd scala at org apache spark api java javarddlike collect javarddlike scala at org apache spark api java javarddlike collect javarddlike scala at org apache spark api java abstractjavarddlike collect javarddlike scala at org apache hudi data hoodiejavardd collectaslist hoodiejavardd java at org apache hudi table action compact runcompactionactionexecutor execute runcompactionactionexecutor java more
0
166,499
6,305,815,806
IssuesEvent
2017-07-21 19:19:12
DashboardHub/PipelineDashboard
https://api.github.com/repos/DashboardHub/PipelineDashboard
closed
Create Generic Widgets to accept data from MicroServices
priority: high
- [ ] Large number blocks (collection) - [ ] Simple line graph - [ ] Tabular data
1.0
Create Generic Widgets to accept data from MicroServices - - [ ] Large number blocks (collection) - [ ] Simple line graph - [ ] Tabular data
non_process
create generic widgets to accept data from microservices large number blocks collection simple line graph tabular data
0
12,470
14,940,217,386
IssuesEvent
2021-01-25 17:58:11
threefoldtech/js-sdk
https://api.github.com/repos/threefoldtech/js-sdk
closed
3bots activate the testnet wallet using friendbot, not the testnet activation service
process_wontfix type_bug
### Description Although this works now and prevents our testnet activation service from being drained, this does not test the actual activation service/flow itself. It also does not test real-life scenarios where the activation service is in need of new funding and operations needs to step in.
1.0
3bots activate the testnet wallet using friendbot, not the testnet activation service - ### Description Although this works now and prevents our testnet activation service from being drained, this does not test the actual activation service/flow itself. It also does not test real-life scenarios where the activation service is in need of new funding and operations needs to step in.
process
activate the testnet wallet using friendbot not the testnet activation service description although this works now and prevents our testnet activation service from being drained this does not test the actual activation service flow itself it also does not test real life scenarios where the activation service is in need of new funding and operations needs to step in
1
17,044
22,421,400,668
IssuesEvent
2022-06-20 03:56:11
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Reject ProcessInstanceCreation command targeting element inside multi-instance
team/process-automation
A target element of the ProcessInstanceCreation command with start instructions, should not be part of a multi-instance embedded sub-process. It would be unclear which specific iteration the instance should be part of. The `ProcessInstanceCreation` command should be rejected: - when one of the target element ids refers to an element that has a multi-instance marked embedded sub-process as one of its flow scopes Blocked by #9390
1.0
Reject ProcessInstanceCreation command targeting element inside multi-instance - A target element of the ProcessInstanceCreation command with start instructions, should not be part of a multi-instance embedded sub-process. It would be unclear which specific iteration the instance should be part of. The `ProcessInstanceCreation` command should be rejected: - when one of the target element ids refers to an element that has a multi-instance marked embedded sub-process as one of its flow scopes Blocked by #9390
process
reject processinstancecreation command targeting element inside multi instance a target element of the processinstancecreation command with start instructions should not be part of a multi instance embedded sub process it would be unclear which specific iteration the instance should be part of the processinstancecreation command should be rejected when one of the target element ids refers to an element that has a multi instance marked embedded sub process as one of its flow scopes blocked by
1
107,681
23,465,791,058
IssuesEvent
2022-08-16 16:38:05
pulumi/pulumi-yaml
https://api.github.com/repos/pulumi/pulumi-yaml
closed
[Go] Convert for Kubernetes provider generates invalid code
resolution/fixed kind/bug area/codegen
### What happened? `pulumi convert` generates invalid Go code for Kubernetes providers. ### Steps to reproduce ```yaml name: pulumi-yaml-patch runtime: yaml description: A minimal Kubernetes Pulumi YAML program resources: provider: type: pulumi:providers:kubernetes properties: enableServerSideApply: true ``` ### Expected Behavior The Kubernetes provider lives in the top-level package rather than in `/providers`. ```go package main import ( corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1" metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1" "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes" "github.com/pulumi/pulumi/sdk/v3/go/pulumi" ) func main() { pulumi.Run(func(ctx *pulumi.Context) error { provider, err := kubernetes.NewProvider(ctx, "provider", &kubernetes.ProviderArgs{ EnableServerSideApply: true, }) if err != nil { return err } return nil }) } ``` ### Actual Behavior ```go package main import ( corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1" metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1" "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/providers" "github.com/pulumi/pulumi/sdk/v3/go/pulumi" ) func main() { pulumi.Run(func(ctx *pulumi.Context) error { provider, err := providers.Newkubernetes(ctx, "provider", &providers.kubernetesArgs{ EnableServerSideApply: true, }) if err != nil { return err } return nil }) } ``` ### Versions used ``` pulumi about CLI Version 3.35.3 Go Version go1.18.3 Go Compiler gc Plugins NAME VERSION kubernetes unknown yaml unknown Host OS darwin Version 12.4 Arch arm64 This project is written in yaml ``` ### Additional context _No response_ ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
1.0
[Go] Convert for Kubernetes provider generates invalid code - ### What happened? `pulumi convert` generates invalid Go code for Kubernetes providers. ### Steps to reproduce ```yaml name: pulumi-yaml-patch runtime: yaml description: A minimal Kubernetes Pulumi YAML program resources: provider: type: pulumi:providers:kubernetes properties: enableServerSideApply: true ``` ### Expected Behavior The Kubernetes provider lives in the top-level package rather than in `/providers`. ```go package main import ( corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1" metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1" "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes" "github.com/pulumi/pulumi/sdk/v3/go/pulumi" ) func main() { pulumi.Run(func(ctx *pulumi.Context) error { provider, err := kubernetes.NewProvider(ctx, "provider", &kubernetes.ProviderArgs{ EnableServerSideApply: true, }) if err != nil { return err } return nil }) } ``` ### Actual Behavior ```go package main import ( corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1" metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1" "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/providers" "github.com/pulumi/pulumi/sdk/v3/go/pulumi" ) func main() { pulumi.Run(func(ctx *pulumi.Context) error { provider, err := providers.Newkubernetes(ctx, "provider", &providers.kubernetesArgs{ EnableServerSideApply: true, }) if err != nil { return err } return nil }) } ``` ### Versions used ``` pulumi about CLI Version 3.35.3 Go Version go1.18.3 Go Compiler gc Plugins NAME VERSION kubernetes unknown yaml unknown Host OS darwin Version 12.4 Arch arm64 This project is written in yaml ``` ### Additional context _No response_ ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
non_process
convert for kubernetes provider generates invalid code what happened pulumi convert generates invalid go code for kubernetes providers steps to reproduce yaml name pulumi yaml patch runtime yaml description a minimal kubernetes pulumi yaml program resources provider type pulumi providers kubernetes properties enableserversideapply true expected behavior the kubernetes provider lives in the top level package rather than in providers go package main import github com pulumi pulumi kubernetes sdk go kubernetes core github com pulumi pulumi kubernetes sdk go kubernetes meta github com pulumi pulumi kubernetes sdk go kubernetes github com pulumi pulumi sdk go pulumi func main pulumi run func ctx pulumi context error provider err kubernetes newprovider ctx provider kubernetes providerargs enableserversideapply true if err nil return err return nil actual behavior go package main import github com pulumi pulumi kubernetes sdk go kubernetes core github com pulumi pulumi kubernetes sdk go kubernetes meta github com pulumi pulumi kubernetes sdk go kubernetes providers github com pulumi pulumi sdk go pulumi func main pulumi run func ctx pulumi context error provider err providers newkubernetes ctx provider providers kubernetesargs enableserversideapply true if err nil return err return nil versions used pulumi about cli version go version go compiler gc plugins name version kubernetes unknown yaml unknown host os darwin version arch this project is written in yaml additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already
0
4,513
7,359,401,952
IssuesEvent
2018-03-10 06:02:25
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
Builder vmware-iso type:esx5 post-processor:vagrant format:ovf can't find vm-name-flat.vmdk
post-processor/vagrant
FOR BUGS: The machine gets provisioned correctly on a remote Esxi 6.0 box. However when doing the post processing it is unable to get the files to run through the ovftool. Then resources are deleted and unless I use debug mode I can not grab the disk image to convert manually. I made sure that VNC ports on the Esxi host were configured correctly. - Packer version from `packer version` Packer v0.12.0 - Host platform Windows 7 - Debug log output from `PACKER_LOG=1 packer build template.json`. https://gist.github.com/nowhammies/fd9255f1ac0bc149de469af2ecec7974 - My json template is a modified version of https://github.com/kaorimatz/packer-templates https://gist.github.com/nowhammies/fd9255f1ac0bc149de469af2ecec7974
1.0
Builder vmware-iso type:esx5 post-processor:vagrant format:ovf can't find vm-name-flat.vmdk - FOR BUGS: The machine gets provisioned correctly on a remote Esxi 6.0 box. However when doing the post processing it is unable to get the files to run through the ovftool. Then resources are deleted and unless I use debug mode I can not grab the disk image to convert manually. I made sure that VNC ports on the Esxi host were configured correctly. - Packer version from `packer version` Packer v0.12.0 - Host platform Windows 7 - Debug log output from `PACKER_LOG=1 packer build template.json`. https://gist.github.com/nowhammies/fd9255f1ac0bc149de469af2ecec7974 - My json template is a modified version of https://github.com/kaorimatz/packer-templates https://gist.github.com/nowhammies/fd9255f1ac0bc149de469af2ecec7974
process
builder vmware iso type post processor vagrant format ovf can t find vm name flat vmdk for bugs the machine gets provisioned correctly on a remote esxi box however when doing the post processing it is unable to get the files to run through the ovftool then resources are deleted and unless i use debug mode i can not grab the disk image to convert manually i made sure that vnc ports on the esxi host were configured correctly packer version from packer version packer host platform windows debug log output from packer log packer build template json my json template is a modified version of
1
41,542
10,512,188,585
IssuesEvent
2019-09-27 17:15:40
google/caliper
https://api.github.com/repos/google/caliper
closed
Method not found: org.apache.commons.math.stat.descriptive.rank.Percentile.setData
type=defect
``` What steps will reproduce the problem? 1. This very simple benchmark: public class ArrayDigestBench { double[] data = new double[10000]; @Param({"16", "32"}) int pageSize; @BeforeExperiment protected void setUp() throws Exception { Random random = new Random(); for (int i = 0; i < data.length; i++) { data[i] = random.nextDouble(); } } @Benchmark double timeArrayDigest(int reps) { double r = 0; TDigest ad = new ArrayDigest(pageSize, 100); for (int i = 0; i < reps; i++) { for (double x : data) { ad.add(x); } System.out.printf("%d\n", i); r += ad.quantile(0.999); } System.out.printf("returning\n"); return r; } } 2. mvn compile 3. mvn exec:java -Dexec.mainClass="com.google.caliper.runner.CaliperMain" -Dexec.args="com.tdunning.math.stats.ArrayDigestBench" What is the expected output? What do you see instead? I would expect the normal caliper sort of output. What version of the product are you using? On what operating system? OSX version 10.9.2, 16GB RAM java version "1.7.0_11" Java(TM) SE Runtime Environment (build 1.7.0_11-b21) Java HotSpot(TM) 64-Bit Server VM (build 23.6-b04, mixed mode) <dependency> <groupId>com.google.caliper</groupId> <artifactId>caliper</artifactId> <version>1.0-beta-SNAPSHOT</version> </dependency> Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 05:51:28-0800) Please provide any additional information below java.lang.NoSuchMethodError: org.apache.commons.math.stat.descriptive.rank.Percentile.setData([D)V at com.google.caliper.runner.ConsoleResultProcessor.processTrial(ConsoleResultProcessor.java:86) at com.google.caliper.runner.ExperimentingCaliperRun.run(ExperimentingCaliperRun.java:154) at com.google.caliper.runner.CaliperMain.exitlessMain(CaliperMain.java:140) at com.google.caliper.runner.CaliperMain.main(CaliperMain.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297) at java.lang.Thread.run(Thread.java:722) ``` Original issue reported on code.google.com by `ted.dunn...@gmail.com` on 4 Mar 2014 at 1:11
1.0
Method not found: org.apache.commons.math.stat.descriptive.rank.Percentile.setData - ``` What steps will reproduce the problem? 1. This very simple benchmark: public class ArrayDigestBench { double[] data = new double[10000]; @Param({"16", "32"}) int pageSize; @BeforeExperiment protected void setUp() throws Exception { Random random = new Random(); for (int i = 0; i < data.length; i++) { data[i] = random.nextDouble(); } } @Benchmark double timeArrayDigest(int reps) { double r = 0; TDigest ad = new ArrayDigest(pageSize, 100); for (int i = 0; i < reps; i++) { for (double x : data) { ad.add(x); } System.out.printf("%d\n", i); r += ad.quantile(0.999); } System.out.printf("returning\n"); return r; } } 2. mvn compile 3. mvn exec:java -Dexec.mainClass="com.google.caliper.runner.CaliperMain" -Dexec.args="com.tdunning.math.stats.ArrayDigestBench" What is the expected output? What do you see instead? I would expect the normal caliper sort of output. What version of the product are you using? On what operating system? 
OSX version 10.9.2, 16GB RAM java version "1.7.0_11" Java(TM) SE Runtime Environment (build 1.7.0_11-b21) Java HotSpot(TM) 64-Bit Server VM (build 23.6-b04, mixed mode) <dependency> <groupId>com.google.caliper</groupId> <artifactId>caliper</artifactId> <version>1.0-beta-SNAPSHOT</version> </dependency> Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 05:51:28-0800) Please provide any additional information below java.lang.NoSuchMethodError: org.apache.commons.math.stat.descriptive.rank.Percentile.setData([D)V at com.google.caliper.runner.ConsoleResultProcessor.processTrial(ConsoleResultProcessor.java:86) at com.google.caliper.runner.ExperimentingCaliperRun.run(ExperimentingCaliperRun.java:154) at com.google.caliper.runner.CaliperMain.exitlessMain(CaliperMain.java:140) at com.google.caliper.runner.CaliperMain.main(CaliperMain.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297) at java.lang.Thread.run(Thread.java:722) ``` Original issue reported on code.google.com by `ted.dunn...@gmail.com` on 4 Mar 2014 at 1:11
non_process
method not found org apache commons math stat descriptive rank percentile setdata what steps will reproduce the problem this very simple benchmark public class arraydigestbench double data new double param int pagesize beforeexperiment protected void setup throws exception random random new random for int i i data length i data random nextdouble benchmark double timearraydigest int reps double r tdigest ad new arraydigest pagesize for int i i reps i for double x data ad add x system out printf d n i r ad quantile system out printf returning n return r mvn compile mvn exec java dexec mainclass com google caliper runner calipermain dexec args com tdunning math stats arraydigestbench what is the expected output what do you see instead i would expect the normal caliper sort of output what version of the product are you using on what operating system osx version ram java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode com google caliper caliper beta snapshot apache maven please provide any additional information below java lang nosuchmethoderror org apache commons math stat descriptive rank percentile setdata d v at com google caliper runner consoleresultprocessor processtrial consoleresultprocessor java at com google caliper runner experimentingcaliperrun run experimentingcaliperrun java at com google caliper runner calipermain exitlessmain calipermain java at com google caliper runner calipermain main calipermain java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus mojo exec execjavamojo run execjavamojo java at java lang thread run thread java original issue reported on code google com by ted dunn gmail com on mar at
0
315,754
23,596,095,244
IssuesEvent
2022-08-23 19:24:01
comp426-2022-fall/a00
https://api.github.com/repos/comp426-2022-fall/a00
closed
Changing a UNIX username
documentation
#### URL of file with confusing thing https://www.cyberciti.biz/faq/howto-change-rename-user-name-id/ #### Line number of confusing thing Linux Change or Rename User Command Syntax #### What is confusing? When I first downloaded Ubuntu, I accidentally closed the window where you create your UNIX login info. I was able to figure out how to change my password, but when I try using the usermod command to change my username, it says "usermod: user root is currently used by process 1." How do I fix this?
1.0
Changing a UNIX username - #### URL of file with confusing thing https://www.cyberciti.biz/faq/howto-change-rename-user-name-id/ #### Line number of confusing thing Linux Change or Rename User Command Syntax #### What is confusing? When I first downloaded Ubuntu, I accidentally closed the window where you create your UNIX login info. I was able to figure out how to change my password, but when I try using the usermod command to change my username, it says "usermod: user root is currently used by process 1." How do I fix this?
non_process
changing a unix username url of file with confusing thing line number of confusing thing linux change or rename user command syntax what is confusing when i first downloaded ubuntu i accidentally closed the window where you create your unix login info i was able to figure out how to change my password but when i try using the usermod command to change my username it says usermod user root is currently used by process how do i fix this
0
503,400
14,591,042,980
IssuesEvent
2020-12-19 10:59:58
Energy-Innovation/eps-us
https://api.github.com/repos/Energy-Innovation/eps-us
closed
Calculate the domestic content share of consumption for ISIC 05T06 in Vensim to account for differences in coal vs. crude oil
3.1.2 high priority
India has significant coal production, but little crude oil production. But coal and crude production are both lumped under ISIC 05T06. In a scenario with lower transportation fuel use, for example, you see a substantial negative change in ISIC 05T06 output, which should be pretty much entirely due to changes in crude demand. That change in output gets multiplied by a DCSoCbIC value of ~0.45, based on OECD data. But looking at our data from BFPIaE, the domestic content share of crude oil consumption specifically is more like 10-15%. Could we calculate the domestic content of each fuel type using BFPIaE, then use that to swap in a calculated variable for the ISIC 05T06 domestic content share of consumption? We could weight the domestic content of each fuel type by the change in revenue associated with that fuel. If this makes sense and is straightforward, could we add this to the 3.1.1 update list?
1.0
Calculate the domestic content share of consumption for ISIC 05T06 in Vensim to account for differences in coal vs. crude oil - India has significant coal production, but little crude oil production. But coal and crude production are both lumped under ISIC 05T06. In a scenario with lower transportation fuel use, for example, you see a substantial negative change in ISIC 05T06 output, which should be pretty much entirely due to changes in crude demand. That change in output gets multiplied by a DCSoCbIC value of ~0.45, based on OECD data. But looking at our data from BFPIaE, the domestic content share of crude oil consumption specifically is more like 10-15%. Could we calculate the domestic content of each fuel type using BFPIaE, then use that to swap in a calculated variable for the ISIC 05T06 domestic content share of consumption? We could weight the domestic content of each fuel type by the change in revenue associated with that fuel. If this makes sense and is straightforward, could we add this to the 3.1.1 update list?
non_process
calculate the domestic content share of consumption for isic in vensim to account for differences in coal vs crude oil india has significant coal production but little crude oil production but coal and crude production are both lumped under isic in a scenario with lower transportation fuel use for example you see a substantial negative change in isic output which should be pretty much entirely due to changes in crude demand that change in output gets multiplied by a dcsocbic value of based on oecd data but looking at our data from bfpiae the domestic content share of crude oil consumption specifically is more like could we calculate the domestic content of each fuel type using bfpiae then use that to swap in a calculated variable for the isic domestic content share of consumption we could weight the domestic content of each fuel type by the change in revenue associated with that fuel if this makes sense and is straightforward could we add this to the update list
0
2,366
5,166,999,579
IssuesEvent
2017-01-17 17:35:45
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
closed
Avoid a single error stopping our processors/collectors
3. In Development Collectors Processors
We should keep trying with the other records when it makes sense. For example, check https://github.com/opentrials/processors/pull/52.
1.0
Avoid a single error stopping our processors/collectors - We should keep trying with the other records when it makes sense. For example, check https://github.com/opentrials/processors/pull/52.
process
avoid a single error stopping our processors collectors we should keep trying with the other records when it makes sense for example check
1
7,586
10,697,510,412
IssuesEvent
2019-10-23 16:40:27
IIIF/api
https://api.github.com/repos/IIIF/api
opened
Image and Presentation 3.0 Feature Implementations
editorial process
The [Evaluation and Testing](https://iiif.io/community/policy/editorial/#evaluation-and-testing) criteria in the IIIF Editorial Process are: > In order to be considered ready for final review, new features must have two open-source server-side implementations, at least one of which should be in production. New features must also have at least one open-source client-side implementation, which may be a proof-of-concept. We'll use this ticket to track implementations of Image and Presentation 3.0 features
1.0
Image and Presentation 3.0 Feature Implementations - The [Evaluation and Testing](https://iiif.io/community/policy/editorial/#evaluation-and-testing) criteria in the IIIF Editorial Process are: > In order to be considered ready for final review, new features must have two open-source server-side implementations, at least one of which should be in production. New features must also have at least one open-source client-side implementation, which may be a proof-of-concept. We'll use this ticket to track implementations of Image and Presentation 3.0 features
process
image and presentation feature implementations the criteria in the iiif editorial process are in order to be considered ready for final review new features must have two open source server side implementations at least one of which should be in production new features must also have at least one open source client side implementation which may be a proof of concept we ll use this ticket to track implementations of image and presentation features
1
474
2,911,381,539
IssuesEvent
2015-06-22 09:11:57
haskell-distributed/distributed-process-azure
https://api.github.com/repos/haskell-distributed/distributed-process-azure
opened
Make all distributed-process-demos work on Azure
distributed-process-azure Feature Request
_From @edsko on October 16, 2012 9:44_ _Copied from original issue: haskell-distributed/distributed-process#45_
1.0
Make all distributed-process-demos work on Azure - _From @edsko on October 16, 2012 9:44_ _Copied from original issue: haskell-distributed/distributed-process#45_
process
make all distributed process demos work on azure from edsko on october copied from original issue haskell distributed distributed process
1
20,796
14,167,960,484
IssuesEvent
2020-11-12 11:01:26
pymor/pymor
https://api.github.com/repos/pymor/pymor
opened
Remove need for ignoring coverage errors
bug infrastructure
Currently generating the coverage xml report needs `--ignore-errors`. Otherwise we fail with a `source for pymor/(builtin) missing` error from coverage. This is triggered in the mpirun_demos for the [ipython test](https://github.com/pymor/pymor/blob/master/src/pymortests/demos.py#L260). Even an empty function body will trigger the issue. Excluding the function with coverage pragmas has no effect. To replicate the issue, remove `--ignore-errors` [here](https://github.com/pymor/pymor/blob/master/.ci/gitlab/test_mpi.bash#L9) and run `make docker_test PYMOR_TEST_SCRIPT=mpi`.
1.0
Remove need for ignoring coverage errors - Currently generating the coverage xml report needs `--ignore-errors`. Otherwise we fail with a `source for pymor/(builtin) missing` error from coverage. This is triggered in the mpirun_demos for the [ipython test](https://github.com/pymor/pymor/blob/master/src/pymortests/demos.py#L260). Even an empty function body will trigger the issue. Excluding the function with coverage pragmas has no effect. To replicate the issue, remove `--ignore-errors` [here](https://github.com/pymor/pymor/blob/master/.ci/gitlab/test_mpi.bash#L9) and run `make docker_test PYMOR_TEST_SCRIPT=mpi`.
non_process
remove need for ignoring coverage errors currently generating the coverage xml report needs ignore errors otherwise we fail with a source for pymor builtin missing error from coverage this is triggered in the mpirun demos for the even an empty function body will trigger the issue excluding the function with coverage pragmas has no effect to replicate the issue remove ignore errors and run make docker test pymor test script mpi
0
352,387
25,064,958,148
IssuesEvent
2022-11-07 07:22:44
crispindeity/issue-tracker
https://api.github.com/repos/crispindeity/issue-tracker
opened
Add Issue API Documentation
📃 Documentation 📬 API BE
# Description - Document the Issue API using Spring Rest Doc. # Progress - [ ] Issue API Documentation
1.0
Add Issue API Documentation - # Description - Document the Issue API using Spring Rest Doc. # Progress - [ ] Issue API Documentation
non_process
add issue api documentation description document the issue api using spring rest doc progress issue api documentation
0
390
2,838,856,862
IssuesEvent
2015-05-27 10:15:20
Graylog2/graylog2-server
https://api.github.com/repos/Graylog2/graylog2-server
closed
Alert on field content?
processing
I am sure that this has been mentioned somewhere previously but it would be awesome to be able to alert based on field content (string and regex matching) - currently field condition alerts only support metrics & statistics. Inverting matches should be supported, too.
1.0
Alert on field content? - I am sure that this has been mentioned somewhere previously but it would be awesome to be able to alert based on field content (string and regex matching) - currently field condition alerts only support metrics & statistics. Inverting matches should be supported, too.
process
alert on field content i am sure that this has been mentioned somewhere previously but it would be awesome to be able to alert based on field content string and regex matching currently field condition alerts only support metrics statistics inverting matches should be supported too
1
17,201
22,778,056,433
IssuesEvent
2022-07-08 16:21:39
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Can't use traceID in transform processor because it returns struct
bug priority:p2 processor/transform
trace_id: Currently accessed as a TraceID struct so does not work in conditions. The transform processor's Getters normally returns a field as the type it is represented in pdata. This can cause problems when the field is used in set expression. I cannot use traceId as it is returned as struct.
1.0
Can't use traceID in transform processor because it returns struct - trace_id: Currently accessed as a TraceID struct so does not work in conditions. The transform processor's Getters normally returns a field as the type it is represented in pdata. This can cause problems when the field is used in set expression. I cannot use traceId as it is returned as struct.
process
can t use traceid in transform processor because it returns struct trace id currently accessed as a traceid struct so does not work in conditions the transform processor s getters normally returns a field as the type it is represented in pdata this can cause problems when the field is used in set expression i cannot use traceid as it is returned as struct
1
55,915
13,697,032,373
IssuesEvent
2020-10-01 01:49:11
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
IL Link warning failures in System.Private.DataContractSerialization breaking the build
area-Serialization blocking-clean-ci blocking-official-build blocking-outerloop linkable-framework untriaged
I'm seeing a variety of trim analysis warnings causing build failures in master and locally due to #42824 Here's some of the warnings I've seen: > D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\ClassDataContract.cs(390,17): Trim analysis error IL2070: System.Runtime.Serialization.ClassDataContract.IsNonAttributedTypeValidForSerialization(Type): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the parameter 'type' of method 'System.Runtime.Serialization.ClassDataContract.IsNonAttributedTypeValidForSerialization(Type)' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\ClassDataContract.cs(1428,17): Trim analysis error IL2075: System.Runtime.Serialization.ClassDataContract.ClassDataContractCriticalHelper.GetNonAttributedTypeConstructor(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\DataContract.cs(1981,29): Trim analysis error IL2070: System.Runtime.Serialization.DataContract.ImportKnownTypeAttributes(Type,Dictionary<Type,Type>,Dictionary`2&): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the parameter 'type' of method 'System.Runtime.Serialization.DataContract.ImportKnownTypeAttributes(Type,Dictionary<Type,Type>,Dictionary`2&)' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlDataContract.cs(288,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlDataContract.GenerateCreateXmlSerializableDelegate(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Reflection.Assembly.GetType(String)' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlDataContract.cs(295,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlDataContract.GenerateCreateXmlSerializableDelegate(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\Json\JsonFormatReaderGenerator.cs(576,29): Trim analysis error IL2075:
System.Runtime.Serialization.Json.JsonFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\Json\JsonFormatReaderGenerator.cs(580,29): Trim analysis error IL2075: System.Runtime.Serialization.Json.JsonFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'.
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\CollectionDataContract.cs(886,17): Trim analysis error IL2075: System.Runtime.Serialization.CollectionDataContract.CollectionDataContractCriticalHelper.GetCollectionElementType(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatReaderGenerator.cs(599,29): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(412,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(413,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(460,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatGeneratorStatics.cs(211,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatGeneratorStatics.get_HashtableCtor(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfHashtable()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\DataContract.cs(1128,25): Trim analysis error IL2075: System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_ParseMethod(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] cc: @eerhardt
1.0
IL Link warning failures in System.Private.DataContractSerialization breaking the build - I'm seeing a variety of trim analysis warnings causing build failures in master and locally due to #42824 Here's some of the warnings I've seen: > D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\ClassDataContract.cs(390,17): Trim analysis error IL2070: System.Runtime.Serialization.ClassDataContract.IsNonAttributedTypeValidForSerialization(Type): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the parameter 'type' of method 'System.Runtime.Serialization.ClassDataContract.IsNonAttributedTypeValidForSerialization(Type)' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\ClassDataContract.cs(1428,17): Trim analysis error IL2075: System.Runtime.Serialization.ClassDataContract.ClassDataContractCriticalHelper.GetNonAttributedTypeConstructor(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\DataContract.cs(1981,29): Trim analysis error IL2070: System.Runtime.Serialization.DataContract.ImportKnownTypeAttributes(Type,Dictionary<Type,Type>,Dictionary`2&): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the parameter 'type' of method 'System.Runtime.Serialization.DataContract.ImportKnownTypeAttributes(Type,Dictionary<Type,Type>,Dictionary`2&)' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlDataContract.cs(288,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlDataContract.GenerateCreateXmlSerializableDelegate(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Reflection.Assembly.GetType(String)' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlDataContract.cs(295,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlDataContract.GenerateCreateXmlSerializableDelegate(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\Json\JsonFormatReaderGenerator.cs(576,29): Trim analysis error IL2075: 
System.Runtime.Serialization.Json.JsonFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\Json\JsonFormatReaderGenerator.cs(580,29): Trim analysis error IL2075: System.Runtime.Serialization.Json.JsonFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlDataContract.cs(244,13): Trim analysis error IL2075: System.Runtime.Serialization.XmlDataContract.GetConstructor(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\CollectionDataContract.cs(886,17): Trim analysis error IL2075: System.Runtime.Serialization.CollectionDataContract.CollectionDataContractCriticalHelper.GetCollectionElementType(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatReaderGenerator.cs(599,29): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatReaderGenerator.CriticalHelper.ReadCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(412,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(413,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatWriterGenerator.cs(460,25): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatWriterGenerator.CriticalHelper.WriteCollection(CollectionDataContract): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfGenericDictionaryEnumerator()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\XmlFormatGeneratorStatics.cs(211,21): Trim analysis error IL2075: System.Runtime.Serialization.XmlFormatGeneratorStatics.get_HashtableCtor(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.Globals.get_TypeOfHashtable()' don't match those on the implicit 'this' parameter of method 'System.Type.GetConstructor(BindingFlags,Binder,Type[],ParameterModifier[])'. 
The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] D:\source\runtime\src\libraries\System.Private.DataContractSerialization\src\System\Runtime\Serialization\DataContract.cs(1128,25): Trim analysis error IL2075: System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_ParseMethod(): The requirements declared via the 'DynamicallyAccessedMembersAttribute' on the return value of method 'System.Runtime.Serialization.DataContract.DataContractCriticalHelper.get_UnderlyingType()' don't match those on the implicit 'this' parameter of method 'System.Type.GetMethod(String,BindingFlags,Binder,Type[],ParameterModifier[])'. The source value must declare at least the same requirements as those declared on the target location it's assigned to [D:\source\runtime\src\libraries\src.proj] cc: @eerhardt
non_process
il link warning failures in system private datacontractserialization breaking the build i m seeing a variety of trim analysis warnings causing build failures in master and locally due to here s some of the warnings i ve seen d source runtime src libraries system private datacontractserialization src system runtime serialization classdatacontract cs trim analysis error system runtime serialization classdatacontract isnonattributedtypevalidforserialization type the requirements declared via the dynamicallyaccessedmembersattribute on the parameter type of method system runtime serialization classdatacontract isnonattributedtypevalidforserialization type don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization classdatacontract cs trim analysis error system runtime serialization classdatacontract classdatacontractcriticalhelper getnonattributedtypeconstructor the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract datacontractcriticalhelper get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization datacontract cs trim analysis error system runtime serialization datacontract importknowntypeattributes type dictionary dictionary the requirements declared via the dynamicallyaccessedmembersattribute on the parameter type of method system runtime serialization datacontract 
importknowntypeattributes type dictionary dictionary don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmldatacontract cs trim analysis error system runtime serialization xmldatacontract generatecreatexmlserializabledelegate the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system reflection assembly gettype string don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmldatacontract cs trim analysis error system runtime serialization xmldatacontract generatecreatexmlserializabledelegate the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization json
jsonformatreadergenerator cs trim analysis error system runtime serialization json jsonformatreadergenerator criticalhelper readcollection collectiondatacontract the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization json jsonformatreadergenerator cs trim analysis error system runtime serialization json jsonformatreadergenerator criticalhelper readcollection collectiondatacontract the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmldatacontract cs trim analysis error system runtime serialization xmldatacontract getconstructor the requirements declared via the
dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization collectiondatacontract cs trim analysis error system runtime serialization collectiondatacontract collectiondatacontractcriticalhelper getcollectionelementtype the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization globals get typeofgenericdictionaryenumerator don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmlformatreadergenerator cs trim analysis error system runtime serialization xmlformatreadergenerator criticalhelper readcollection collectiondatacontract the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract get underlyingtype don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmlformatwritergenerator cs trim analysis error system runtime serialization xmlformatwritergenerator criticalhelper writecollection collectiondatacontract the 
requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization globals get typeofgenericdictionaryenumerator don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmlformatwritergenerator cs trim analysis error system runtime serialization xmlformatwritergenerator criticalhelper writecollection collectiondatacontract the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization globals get typeofgenericdictionaryenumerator don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmlformatwritergenerator cs trim analysis error system runtime serialization xmlformatwritergenerator criticalhelper writecollection collectiondatacontract the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization globals get typeofgenericdictionaryenumerator don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization xmlformatgeneratorstatics cs trim analysis error system runtime serialization xmlformatgeneratorstatics 
get hashtablector the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization globals get typeofhashtable don t match those on the implicit this parameter of method system type getconstructor bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to d source runtime src libraries system private datacontractserialization src system runtime serialization datacontract cs trim analysis error system runtime serialization datacontract datacontractcriticalhelper get parsemethod the requirements declared via the dynamicallyaccessedmembersattribute on the return value of method system runtime serialization datacontract datacontractcriticalhelper get underlyingtype don t match those on the implicit this parameter of method system type getmethod string bindingflags binder type parametermodifier the source value must declare at least the same requirements as those declared on the target location it s assigned to cc eerhardt
0
617,849
19,406,365,681
IssuesEvent
2021-12-20 01:40:11
eatmyvenom/HyArcade
https://api.github.com/repos/eatmyvenom/HyArcade
closed
[FEATURE] add a configurable json replacer to database queries
enhancement question Medium priority t:database
When getting the `/db` endpoint the pure time of stringifying and sending the data slows down everything due to the blocking nature of JSON handling. Not only can the json stringing process be changed slightly to allow more ticks to pass for other queries to be handled. But the data can be dramatically reduced with the JSON reducer. A simple argument in the url would suffice and would allow for any level of configuration.
1.0
[FEATURE] add a configurable json replacer to database queries - When getting the `/db` endpoint the pure time of stringifying and sending the data slows down everything due to the blocking nature of JSON handling. Not only can the json stringing process be changed slightly to allow more ticks to pass for other queries to be handled. But the data can be dramatically reduced with the JSON reducer. A simple argument in the url would suffice and would allow for any level of configuration.
non_process
add a configurable json replacer to database queries when getting the db endpoint the pure time of stringifying and sending the data slows down everything due to the blocking nature of json handling not only can the json stringing process be changed slightly to allow more ticks to pass for other queries to be handled but the data can be dramatically reduced with the json reducer a simple argument in the url would suffice and would allow for any level of configuration
0
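The HyArcade record above proposes making the `/db` endpoint's serialization configurable through a JSON replacer chosen by a URL argument. A minimal TypeScript sketch of that idea follows; the `fields` query value, the `db` shape, and the helper name `makeReplacer` are hypothetical illustrations, not HyArcade's actual API.

```typescript
// Sketch: filter the serialized payload with a JSON.stringify replacer.
// Only direct children of the root object are filtered; nested keys of a
// kept object pass through untouched.
type Json = Record<string, unknown>;

function makeReplacer(root: object, keep: Set<string>) {
  // Must be a regular function: JSON.stringify binds `this` to the
  // object currently holding `key`, which lets us detect the top level.
  return function (this: unknown, key: string, value: unknown): unknown {
    if (this === root && !keep.has(key)) {
      return undefined; // returning undefined omits the property
    }
    return value;
  };
}

// Hypothetical stand-in for the endpoint's data.
const db: Json = {
  uuid: "abc123",
  stats: { wins: 10, losses: 3 },
  history: [1, 2, 3],
};

// e.g. GET /db?fields=uuid,stats
const requested = new Set("uuid,stats".split(","));
const body = JSON.stringify(db, makeReplacer(db, requested));
```

A request handler could parse `?fields=...` into the `Set` before serializing (omitting the replacer entirely when no fields are requested). Trimming the payload this way also shortens the synchronous `JSON.stringify` call itself, which speaks to the blocking concern the record raises.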
3,721
6,732,894,517
IssuesEvent
2017-10-18 13:13:04
lockedata/rcms
https://api.github.com/repos/lockedata/rcms
opened
Register
attendee osem processes
## Detailed task Buy a ticket for the conference ## Assessing the task Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks. Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback. ## Extra Info - Site: [osem](https://intense-shore-93790.herokuapp.com/) - System documentation: [osem docs](http://osem.io/) - Role: Attendee - Area: Processes
1.0
Register - ## Detailed task Buy a ticket for the conference ## Assessing the task Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks. Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback. ## Extra Info - Site: [osem](https://intense-shore-93790.herokuapp.com/) - System documentation: [osem docs](http://osem.io/) - Role: Attendee - Area: Processes
process
register detailed task buy a ticket for the conference assessing the task try to perform the task use google and the system documentation to help part of what we re trying to assess how easy it is for people to work out how to do tasks use a 👍 reaction to this task if you were able to perform the task use a 👎 reaction to the task if you could not complete it add a reply with any comments or feedback extra info site system documentation role attendee area processes
1
12,123
14,740,777,733
IssuesEvent
2021-01-07 09:36:51
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Laser - New account error
anc-process anp-important ant-bug ant-parent/primary
In GitLab by @kdjstudios on Dec 3, 2018, 10:03 **Submitted by:** "'Jessica Hinkle'" <jhinkle@laseranswering.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-12-03-19324 **Server:** External **Client/Site:** Laser **Account:** ALL **Issue:** I am trying to enter a new customer in SA Billing and it wont allow me to save or enter the recurring fees, etc. Thanks in advance for your help. This is the attached screen shot that shows the page that wont allow me to edit recurring fees, etc or save the account as a new customer. **NOTE:** I was unable to attach the screen shot sent by Jessica as it was is html format and when opened didn't display anything.
1.0
Laser - New account error - In GitLab by @kdjstudios on Dec 3, 2018, 10:03 **Submitted by:** "'Jessica Hinkle'" <jhinkle@laseranswering.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-12-03-19324 **Server:** External **Client/Site:** Laser **Account:** ALL **Issue:** I am trying to enter a new customer in SA Billing and it wont allow me to save or enter the recurring fees, etc. Thanks in advance for your help. This is the attached screen shot that shows the page that wont allow me to edit recurring fees, etc or save the account as a new customer. **NOTE:** I was unable to attach the screen shot sent by Jessica as it was is html format and when opened didn't display anything.
process
laser new account error in gitlab by kdjstudios on dec submitted by jessica hinkle helpdesk server external client site laser account all issue i am trying to enter a new customer in sa billing and it wont allow me to save or enter the recurring fees etc thanks in advance for your help this is the attached screen shot that shows the page that wont allow me to edit recurring fees etc or save the account as a new customer note i was unable to attach the screen shot sent by jessica as it was is html format and when opened didn t display anything
1
6,160
2,584,002,073
IssuesEvent
2015-02-16 12:07:14
TrinityCore/TrinityCore
https://api.github.com/repos/TrinityCore/TrinityCore
closed
[Bug]Tome of Mel'Thandris event missing
Comp-Database Feedback-FixOutdatedMissingWip Priority-Low Sub-Quests
How it should work: 1. Accept "The Howling Vale". 2. Use "Tome of Mel'Thandris". 3. "Velinde Starsong" will appear and say: "The numbers of my companions dwindles, goddess, and my own power shall soon be insufficient to hold back the demons of Felwood." "Goddess, grant me the power to overcome my enemies! Hear me, please, my need is desperate and my cause is just!" "What... what is this? Could this be the answer to my prayers? Elune has granted me a weapon -- this scythe -- to defeat the demons." Bug: "Velinde Starsong" never appears. Links: The Howling Vale - http://www.wowwiki.com/Quest:The_Howling_Vale Velinde Starsong - http://www.wowhead.com/npc=3946 Tome of Mel'Thandris - http://www.wowhead.com/object=19027 Core revision: d943b1d67980 Database version: TDB 335.53 Addons: Anticheat1
1.0
[Bug]Tome of Mel'Thandris event missing - How it should work: 1. Accept "The Howling Vale". 2. Use "Tome of Mel'Thandris". 3. "Velinde Starsong" will appear and say: "The numbers of my companions dwindles, goddess, and my own power shall soon be insufficient to hold back the demons of Felwood." "Goddess, grant me the power to overcome my enemies! Hear me, please, my need is desperate and my cause is just!" "What... what is this? Could this be the answer to my prayers? Elune has granted me a weapon -- this scythe -- to defeat the demons." Bug: "Velinde Starsong" never appears. Links: The Howling Vale - http://www.wowwiki.com/Quest:The_Howling_Vale Velinde Starsong - http://www.wowhead.com/npc=3946 Tome of Mel'Thandris - http://www.wowhead.com/object=19027 Core revision: d943b1d67980 Database version: TDB 335.53 Addons: Anticheat1
non_process
tome of mel thandris event missing how it should work accept the howling vale use tome of mel thandris velinde starsong will appear and say the numbers of my companions dwindles goddess and my own power shall soon be insufficient to hold back the demons of felwood goddess grant me the power to overcome my enemies hear me please my need is desperate and my cause is just what what is this could this be the answer to my prayers elune has granted me a weapon this scythe to defeat the demons bug velinde starsong never appears links the howling vale velinde starsong tome of mel thandris core revision database version tdb addons
0
809,858
30,214,999,681
IssuesEvent
2023-07-05 15:01:36
sdatkinson/NeuralAmpModelerPlugin
https://api.github.com/repos/sdatkinson/NeuralAmpModelerPlugin
closed
Please make "Select model" and "Select IR" text labels clickable, would improve GUI accessibility
enhancement priority:high
Hi, I'm a totally blind guitar player who uses open source screen reader software and DAW parameters to control plug-ins. Just been checking out NAM, and it's good that you've already got all adjustable controls exposed to DAW automation, but at the moment I'm not able to load models or IRs because the iPlug2 GUI isn't natively accessible at all. Can you make the "Select model" and "Select IR" text labels clickable? Screen reader software can scan The GUI using optical character recognition, so if clicking those lines of text performed the same function as clicking the file icons), it would make it possible for screen reader users like me to get stuff loaded. OCR won't detect the file icons though, so right now there's nothin' doin'. Thanks in advance for any improvements you can rustle up. Happy to clarify any questions of course. Cheers, Scott (London)
1.0
Please make "Select model" and "Select IR" text labels clickable, would improve GUI accessibility - Hi, I'm a totally blind guitar player who uses open source screen reader software and DAW parameters to control plug-ins. Just been checking out NAM, and it's good that you've already got all adjustable controls exposed to DAW automation, but at the moment I'm not able to load models or IRs because the iPlug2 GUI isn't natively accessible at all. Can you make the "Select model" and "Select IR" text labels clickable? Screen reader software can scan The GUI using optical character recognition, so if clicking those lines of text performed the same function as clicking the file icons), it would make it possible for screen reader users like me to get stuff loaded. OCR won't detect the file icons though, so right now there's nothin' doin'. Thanks in advance for any improvements you can rustle up. Happy to clarify any questions of course. Cheers, Scott (London)
non_process
please make select model and select ir text labels clickable would improve gui accessibility hi i m a totally blind guitar player who uses open source screen reader software and daw parameters to control plug ins just been checking out nam and it s good that you ve already got all adjustable controls exposed to daw automation but at the moment i m not able to load models or irs because the gui isn t natively accessible at all can you make the select model and select ir text labels clickable screen reader software can scan the gui using optical character recognition so if clicking those lines of text performed the same function as clicking the file icons it would make it possible for screen reader users like me to get stuff loaded ocr won t detect the file icons though so right now there s nothin doin thanks in advance for any improvements you can rustle up happy to clarify any questions of course cheers scott london
0
9,492
12,484,677,222
IssuesEvent
2020-05-30 15:54:26
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
closed
[BUG] Unknown key: "recipe-name.puffer-puffing-12" to "...-15"
Angels Bio Processing Impact: Bug
**Describe the bug** The new puffer puffing recipes are missing the locale key: * recipe-name.puffer-puffing-12 * recipe-name.puffer-puffing-13 * recipe-name.puffer-puffing-14 * recipe-name.puffer-puffing-15 **To Reproduce** Factorio 0.18.28 with Angel's Bio Processing 0.7.9. Hover over the recipe in either the technology tree before researched or when selecting it in the Puffer Refugiume. **Screenshots** ![image](https://user-images.githubusercontent.com/458548/83272725-d019b480-a1cb-11ea-9ec4-83dc7b3b6b89.png)
1.0
[BUG] Unknown key: "recipe-name.puffer-puffing-12" to "...-15" - **Describe the bug** The new puffer puffing recipes are missing the locale key: * recipe-name.puffer-puffing-12 * recipe-name.puffer-puffing-13 * recipe-name.puffer-puffing-14 * recipe-name.puffer-puffing-15 **To Reproduce** Factorio 0.18.28 with Angel's Bio Processing 0.7.9. Hover over the recipe in either the technology tree before researched or when selecting it in the Puffer Refugiume. **Screenshots** ![image](https://user-images.githubusercontent.com/458548/83272725-d019b480-a1cb-11ea-9ec4-83dc7b3b6b89.png)
process
unknown key recipe name puffer puffing to describe the bug the new puffer puffing recipes are missing the locale key recipe name puffer puffing recipe name puffer puffing recipe name puffer puffing recipe name puffer puffing to reproduce factorio with angel s bio processing hover over the recipe in either the technology tree before researched or when selecting it in the puffer refugiume screenshots
1
6,854
9,992,266,751
IssuesEvent
2019-07-11 13:06:55
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
Timeout error when taking an offer
in:network in:trade-process was:dropped
I am trying to take a HalCash sell offer but I am getting a timeout error. I am using Windows 7 Professional 64 bits and Bisq version is 0.9.7. Logs attached: [Bisq_log_20190403.txt](https://github.com/bisq-network/bisq/files/3038137/Bisq_log_20190403.txt)
1.0
Timeout error when taking an offer - I am trying to take a HalCash sell offer but I am getting a timeout error. I am using Windows 7 Professional 64 bits and Bisq version is 0.9.7. Logs attached: [Bisq_log_20190403.txt](https://github.com/bisq-network/bisq/files/3038137/Bisq_log_20190403.txt)
process
timeout error when taking an offer i am trying to take a halcash sell offer but i am getting a timeout error i am using windows professional bits and bisq version is logs attached
1
8,921
12,032,269,998
IssuesEvent
2020-04-13 11:43:15
nanoframework/Home
https://api.github.com/repos/nanoframework/Home
closed
MDP error with some device libs
Area: Metadata Processor Status: FIXED Type: Bug
#Details about Problem When building some device libraries with the new MDP, it fails. due to a missing reference table for System.DateTime Found so far: Windows.Devices.SerialCommunication Windows.Devices.GPIO nanoFramework.Devices.Can **VS version** 2019 **VS extension version** 2019.1.8.10 ## Detailed repro steps so we can see the same problem 1. Clone the repo 2. Open with Visual Studio 3. Change to release mode 4. Rebuild solution ## Screenshot ![image](https://user-images.githubusercontent.com/11439699/79066608-35175a80-7cb1-11ea-96e4-70b03259485b.png) ![image](https://user-images.githubusercontent.com/11439699/79066894-229e2080-7cb3-11ea-8408-79aa18734fcc.png) ![image](https://user-images.githubusercontent.com/11439699/79072164-76bafc00-7cd7-11ea-883d-6cfc7984c728.png)
1.0
MDP error with some device libs - #Details about Problem When building some device libraries with the new MDP, it fails. due to a missing reference table for System.DateTime Found so far: Windows.Devices.SerialCommunication Windows.Devices.GPIO nanoFramework.Devices.Can **VS version** 2019 **VS extension version** 2019.1.8.10 ## Detailed repro steps so we can see the same problem 1. Clone the repo 2. Open with Visual Studio 3. Change to release mode 4. Rebuild solution ## Screenshot ![image](https://user-images.githubusercontent.com/11439699/79066608-35175a80-7cb1-11ea-96e4-70b03259485b.png) ![image](https://user-images.githubusercontent.com/11439699/79066894-229e2080-7cb3-11ea-8408-79aa18734fcc.png) ![image](https://user-images.githubusercontent.com/11439699/79072164-76bafc00-7cd7-11ea-883d-6cfc7984c728.png)
process
mdp error with some device libs details about problem when building some device libraries with the new mdp it fails due to a missing reference table for system datetime found so far windows devices serialcommunication windows devices gpio nanoframework devices can vs version vs extension version detailed repro steps so we can see the same problem clone the repo open with visual studio change to release mode rebuild solution screenshot
1
18,826
24,726,281,896
IssuesEvent
2022-10-20 14:16:18
km4ack/pi-build
https://api.github.com/repos/km4ack/pi-build
closed
repeater start app check logic issue
Slated in process
a dorky issue but the repeater start logic is saying needs update when not installed
1.0
repeater start app check logic issue - a dorky issue but the repeater start logic is saying needs update when not installed
process
repeater start app check logic issue a dorky issue but the repeater start logic is saying needs update when not installed
1
4,031
6,968,995,291
IssuesEvent
2017-12-11 02:01:35
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Code Coverage - Require PRs to be green and have test coverage?
category: dev & admin process discussion
It's been noted since we added code coverage that we are getting red 'X' marks on PR builds, and we are effectively ignoring them, eg: ![codecov](https://user-images.githubusercontent.com/12397753/30892784-4e5d68ca-a2ef-11e7-9ee4-786690203ae8.png) Thinking about this a bit, I think this is a bad state/process to have. We are incurring overhead to check if a build "actually" passed or not, yet meanwhile we are not getting any good value from code coverage since we are ignoring it. I was tempted initially to suggest we just remove code coverage. Though, I think it might actually be better to go the other route and start requiring for the minimum code coverage to be hit. This can be a lot to ask for this code base since a lot of the code needs quite a bit of work to be testable. I suspect though, we do need to bite the bullet at some point and really start adding unit tests for anything we modify. After all, any modified code is effectively new code, and should be tested as such. Currently it seems we are still hand testing these updates, and that is a pattern we need to break out of so we can reduce our manual test burden. Thoughts? Is there a general concensus to start requiring code coverage plug in to start going green (currently meaning 20% minimum code coverage on PRs)
1.0
Code Coverage - Require PRs to be green and have test coverage? - It's been noted since we added code coverage that we are getting red 'X' marks on PR builds, and we are effectively ignoring them, eg: ![codecov](https://user-images.githubusercontent.com/12397753/30892784-4e5d68ca-a2ef-11e7-9ee4-786690203ae8.png) Thinking about this a bit, I think this is a bad state/process to have. We are incurring overhead to check if a build "actually" passed or not, yet meanwhile we are not getting any good value from code coverage since we are ignoring it. I was tempted initially to suggest we just remove code coverage. Though, I think it might actually be better to go the other route and start requiring for the minimum code coverage to be hit. This can be a lot to ask for this code base since a lot of the code needs quite a bit of work to be testable. I suspect though, we do need to bite the bullet at some point and really start adding unit tests for anything we modify. After all, any modified code is effectively new code, and should be tested as such. Currently it seems we are still hand testing these updates, and that is a pattern we need to break out of so we can reduce our manual test burden. Thoughts? Is there a general concensus to start requiring code coverage plug in to start going green (currently meaning 20% minimum code coverage on PRs)
process
code coverage require prs to be green and have test coverage it s been noted since we added code coverage that we are getting red x marks on pr builds and we are effectively ignoring them eg thinking about this a bit i think this is a bad state process to have we are incurring overhead to check if a build actually passed or not yet meanwhile we are not getting any good value from code coverage since we are ignoring it i was tempted initially to suggest we just remove code coverage though i think it might actually be better to go the other route and start requiring for the minimum code coverage to be hit this can be a lot to ask for this code base since a lot of the code needs quite a bit of work to be testable i suspect though we do need to bite the bullet at some point and really start adding unit tests for anything we modify after all any modified code is effectively new code and should be tested as such currently it seems we are still hand testing these updates and that is a pattern we need to break out of so we can reduce our manual test burden thoughts is there a general concensus to start requiring code coverage plug in to start going green currently meaning minimum code coverage on prs
1
333,695
24,387,147,715
IssuesEvent
2022-10-04 12:42:14
OML-Team/open-metric-learning
https://api.github.com/repos/OML-Team/open-metric-learning
closed
Add pre-commit hook for checking if README was built.
documentation technical
You can info about pre-commit hooks here: https://pre-commit.com/#new-hooks You will probably need to edit `id`, `name` and `entry` parameters.
1.0
Add pre-commit hook for checking if README was built. - You can info about pre-commit hooks here: https://pre-commit.com/#new-hooks You will probably need to edit `id`, `name` and `entry` parameters.
non_process
add pre commit hook for checking if readme was built you can info about pre commit hooks here you will probably need to edit id name and entry parameters
0
21,358
29,189,627,921
IssuesEvent
2023-05-19 18:37:50
acumenlabs/status-page
https://api.github.com/repos/acumenlabs/status-page
closed
⚠️ Processors has degraded performance
status processors
In [`52ad242`](https://github.com/acumenlabs/status-page/commit/52ad2420b3f41995b3fbce0d17552a35d48ca3fa ), Processors ($STATUS_URL) experienced **degraded performance**: - HTTP code: 200 - Response time: 80 ms
1.0
⚠️ Processors has degraded performance - In [`52ad242`](https://github.com/acumenlabs/status-page/commit/52ad2420b3f41995b3fbce0d17552a35d48ca3fa ), Processors ($STATUS_URL) experienced **degraded performance**: - HTTP code: 200 - Response time: 80 ms
process
⚠️ processors has degraded performance in processors status url experienced degraded performance http code response time ms
1
227,369
17,381,388,675
IssuesEvent
2021-07-31 19:43:35
paragonie/paseto
https://api.github.com/repos/paragonie/paseto
closed
Maybe wrong documentation ?
documentation
I decided to use paseto and I starting to read the docs but this seems to have an error [https://github.com/paragonie/paseto/tree/master/docs](https://github.com/paragonie/paseto/tree/master/docs) ```php php /** * @var SymmetricKey $sharedKey */ $token = Builder::getLocal($sharedKey, new Version2()); $token = (new Builder()) ->setKey($sharedKey) .... ``` in fact is missing the variable ``` $sharedKey = new SymmetricKey(random_bytes(32)); ``` in both example
1.0
Maybe wrong documentation ? - I decided to use paseto and I starting to read the docs but this seems to have an error [https://github.com/paragonie/paseto/tree/master/docs](https://github.com/paragonie/paseto/tree/master/docs) ```php php /** * @var SymmetricKey $sharedKey */ $token = Builder::getLocal($sharedKey, new Version2()); $token = (new Builder()) ->setKey($sharedKey) .... ``` in fact is missing the variable ``` $sharedKey = new SymmetricKey(random_bytes(32)); ``` in both example
non_process
maybe wrong documentation i decided to use paseto and i starting to read the docs but this seems to have an error php php var symmetrickey sharedkey token builder getlocal sharedkey new token new builder setkey sharedkey in fact is missing the variable sharedkey new symmetrickey random bytes in both example
0
8,572
11,740,082,167
IssuesEvent
2020-03-11 18:54:08
MicrosoftDocs/vsts-docs
https://api.github.com/repos/MicrosoftDocs/vsts-docs
reopened
variable as environment name not working
Pri2 devops-cicd-process/tech devops/prod doc-bug
when i try doing this: `environment: '$(envName).$(resourceName)'` (variables are in a stage level variable group) i get this error: `Job Deploy Resource $(resourceName) does not exist in environment $(envName)` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce * Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f * Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops) * Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments-kubernetes.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
variable as environment name not working - when i try doing this: `environment: '$(envName).$(resourceName)'` (variables are in a stage level variable group) i get this error: `Job Deploy Resource $(resourceName) does not exist in environment $(envName)` --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce * Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f * Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops) * Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments-kubernetes.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
variable as environment name not working when i try doing this environment envname resourcename variables are in a stage level variable group i get this error job deploy resource resourcename does not exist in environment envname document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
19,508
25,820,684,074
IssuesEvent
2022-12-12 09:21:20
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Test failure JIT/Methodical/Arrays/lcs/lcs2_r/lcs2_r.cmd
arch-arm64 area-System.Diagnostics.Process os-mac-os-x GCStress blocking-clean-ci-optional needs-further-triage in-pr
Run:[runtime-coreclr gcstress-extra 20220508.1](https://dev.azure.com/dnceng/public/_build/results?buildId=1759074&view=ms.vss-test-web.build-test-results-tab&runId=47379698&paneView=debug&resultId=110828) Failed test: ``` coreclr OSX arm64 Checked gcstress0xc_jitstress2 @ OSX.1200.ARM64.Open - JIT/Methodical/Arrays/lcs/lcs2_r/lcs2_r.cmd ``` **Error message:** ``` cmdLine:/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Timed Out (timeout in milliseconds: 5400000 from variable __TestTimeout, start: 5/8/2022 5:10:55 PM, end: 5/8/2022 6:40:58 PM) Return code: -100 Raw output file: /tmp/helix/working/B4220957/w/A7D308F6/uploads/Arrays/lcs/lcs2_r/output.txt Raw output: SKIPPING EXECUTION BECAUSE COMPlus_GCStress IS SET cmdLine:/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Timed Out (timeout in milliseconds: 5400000 from variable __TestTimeout, start: 5/8/2022 5:10:55 PM, end: 5/8/2022 6:40:58 PM) Test Harness Exitcode is : -100 To run the test: Set up CORE_ROOT and run. 
/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Expected: True Actual: False Stack trace at TestLibrary.OutOfProcessTest.RunOutOfProcessTest(String basePath, String assemblyPath) at Program.<Main>$(String[] args) ``` Queued | OS | Arch | Pipeline -- | -- | -- | -- 2022-05-09T00:03:15.936Z | osx.1200.arm64.open | arm64 | [runtime-coreclr gcstress-extra Checked-gcstress0xc_jitstress2](https://dev.azure.com/dnceng/public/_build/results?buildId=1759074&view=ms.vss-test-web.build-test-results-tab&runId=47379698&resultId=110828&paneView=debug) 2022-04-09T08:49:58.428Z | osx.1015.amd64.open | x64 | [runtime-staging Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1708665&view=ms.vss-test-web.build-test-results-tab&runId=46494230&resultId=100331&paneView=debug) 2022-03-30T20:41:58.472Z | osx.1015.amd64.open | x64 | [runtime-staging Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1690994&view=ms.vss-test-web.build-test-results-tab&runId=46199434&resultId=100533&paneView=debug) 2022-03-16T07:23:50.554Z | ubuntu.1804.amd64.android.29.open.svc | x64 | [runtime-extra-platforms Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1666049&view=ms.vss-test-web.build-test-results-tab&runId=45804474&resultId=100244&paneView=debug)
1.0
Test failure JIT/Methodical/Arrays/lcs/lcs2_r/lcs2_r.cmd - Run:[runtime-coreclr gcstress-extra 20220508.1](https://dev.azure.com/dnceng/public/_build/results?buildId=1759074&view=ms.vss-test-web.build-test-results-tab&runId=47379698&paneView=debug&resultId=110828) Failed test: ``` coreclr OSX arm64 Checked gcstress0xc_jitstress2 @ OSX.1200.ARM64.Open - JIT/Methodical/Arrays/lcs/lcs2_r/lcs2_r.cmd ``` **Error message:** ``` cmdLine:/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Timed Out (timeout in milliseconds: 5400000 from variable __TestTimeout, start: 5/8/2022 5:10:55 PM, end: 5/8/2022 6:40:58 PM) Return code: -100 Raw output file: /tmp/helix/working/B4220957/w/A7D308F6/uploads/Arrays/lcs/lcs2_r/output.txt Raw output: SKIPPING EXECUTION BECAUSE COMPlus_GCStress IS SET cmdLine:/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Timed Out (timeout in milliseconds: 5400000 from variable __TestTimeout, start: 5/8/2022 5:10:55 PM, end: 5/8/2022 6:40:58 PM) Test Harness Exitcode is : -100 To run the test: Set up CORE_ROOT and run. 
/private/tmp/helix/working/B4220957/w/A7D308F6/e/JIT/Methodical/Methodical_r1/../Arrays/lcs/lcs2_r/lcs2_r.sh Expected: True Actual: False Stack trace at TestLibrary.OutOfProcessTest.RunOutOfProcessTest(String basePath, String assemblyPath) at Program.<Main>$(String[] args) ``` Queued | OS | Arch | Pipeline -- | -- | -- | -- 2022-05-09T00:03:15.936Z | osx.1200.arm64.open | arm64 | [runtime-coreclr gcstress-extra Checked-gcstress0xc_jitstress2](https://dev.azure.com/dnceng/public/_build/results?buildId=1759074&view=ms.vss-test-web.build-test-results-tab&runId=47379698&resultId=110828&paneView=debug) 2022-04-09T08:49:58.428Z | osx.1015.amd64.open | x64 | [runtime-staging Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1708665&view=ms.vss-test-web.build-test-results-tab&runId=46494230&resultId=100331&paneView=debug) 2022-03-30T20:41:58.472Z | osx.1015.amd64.open | x64 | [runtime-staging Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1690994&view=ms.vss-test-web.build-test-results-tab&runId=46199434&resultId=100533&paneView=debug) 2022-03-16T07:23:50.554Z | ubuntu.1804.amd64.android.29.open.svc | x64 | [runtime-extra-platforms Release](https://dev.azure.com/dnceng/public/_build/results?buildId=1666049&view=ms.vss-test-web.build-test-results-tab&runId=45804474&resultId=100244&paneView=debug)
process
test failure jit methodical arrays lcs r r cmd run: failed test coreclr osx checked osx open jit methodical arrays lcs r r cmd error message cmdline private tmp helix working w e jit methodical methodical arrays lcs r r sh timed out timeout in milliseconds from variable testtimeout start pm end pm return code raw output file tmp helix working w uploads arrays lcs r output txt raw output skipping execution because complus gcstress is set cmdline private tmp helix working w e jit methodical methodical arrays lcs r r sh timed out timeout in milliseconds from variable testtimeout start pm end pm test harness exitcode is to run the test set up core root and run private tmp helix working w e jit methodical methodical arrays lcs r r sh expected true actual false stack trace at testlibrary outofprocesstest runoutofprocesstest string basepath string assemblypath at program string args queued os arch pipeline osx open osx open osx open ubuntu android open svc
1
17,128
22,649,039,683
IssuesEvent
2022-07-01 11:39:54
PyCQA/pylint
https://api.github.com/repos/PyCQA/pylint
closed
inconsistent result in multiple runs
Bug :beetle: topic-multiprocessing
I'm looking into the strange result from pylint on Apache Beam Python SDK. Multiple runs return slightly different results. It looks something related to multi-processing thing because I couldn't find any pattern from this behavior. ### Steps to reproduce 1. Clone https://github.com/apache/beam and checkout `0c8ccae9aa608f4d64b22c08d57b9aaa8724bfee` 2. Go into sdks/python directory. 2. Run scripts/run_pylint.sh multiple times and compare the result. ### Current behavior ``` beam/sdks/python$ for i in `seq 100`; do scripts/run_pylint.sh; done > lint_100.txt beam/sdks/python$ grep slots-on-old-class lint_100.txt | wc -l 100 beam/sdks/python$ grep no-self-argument lint_100.txt | wc -l 42 ``` ### Expected behavior Consistently return the same result ### pylint --version output ``` Using config file beam/sdks/python/.pylintrc pylint 1.9.3, astroid 1.6.5 Python 2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516] ``` Related issue: https://issues.apache.org/jira/browse/BEAM-5846
1.0
inconsistent result in multiple runs - I'm looking into the strange result from pylint on Apache Beam Python SDK. Multiple runs return slightly different results. It looks something related to multi-processing thing because I couldn't find any pattern from this behavior. ### Steps to reproduce 1. Clone https://github.com/apache/beam and checkout `0c8ccae9aa608f4d64b22c08d57b9aaa8724bfee` 2. Go into sdks/python directory. 2. Run scripts/run_pylint.sh multiple times and compare the result. ### Current behavior ``` beam/sdks/python$ for i in `seq 100`; do scripts/run_pylint.sh; done > lint_100.txt beam/sdks/python$ grep slots-on-old-class lint_100.txt | wc -l 100 beam/sdks/python$ grep no-self-argument lint_100.txt | wc -l 42 ``` ### Expected behavior Consistently return the same result ### pylint --version output ``` Using config file beam/sdks/python/.pylintrc pylint 1.9.3, astroid 1.6.5 Python 2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516] ``` Related issue: https://issues.apache.org/jira/browse/BEAM-5846
process
inconsistent result in multiple runs i m looking into the strange result from pylint on apache beam python sdk multiple runs return slightly different results it looks something related to multi processing thing because i couldn t find any pattern from this behavior steps to reproduce clone and checkout go into sdks python directory run scripts run pylint sh multiple times and compare the result current behavior beam sdks python for i in seq do scripts run pylint sh done lint txt beam sdks python grep slots on old class lint txt wc l beam sdks python grep no self argument lint txt wc l expected behavior consistently return the same result pylint version output using config file beam sdks python pylintrc pylint astroid python default nov related issue
1
5,305
8,125,061,299
IssuesEvent
2018-08-16 19:38:44
mkdocs/mkdocs
https://api.github.com/repos/mkdocs/mkdocs
closed
Bootswatch themes repo and maintenance
Process Theme
I've updated my version of bootswatch mkdocs themes to the latest mkdocs theme and bootswatch (upstream) themes. Now, I know there is the mkdocs/mkdocs-bootswatch repository but it's outdated both regarding mkdocs and regarding bootswatch. I've [automated (as much as possible)](https://github.com/mkdocs/mkdocs-bootswatch/issues/21) the process of building the entire familiy of themes from the upstream bootswatch themes and w.r.t. the latest mkdocs theme. Thanks to this automation, I keep my themes very up-to-date and they fix most of the issues reported to mkdocs/mkdocs-bootswatch tracker. A time ago I offered to become the maintainer for mkdocs/mkdocs-bootswatch but get no answer. Despite the automation, keeping the themes up-to-date is no free lunch, so I need to take a decision between: 1. I become the maintainer of mkdocs/mkdocs-bootswatch. 2. I add an entry to the wiki and keep my own version of mkdocs-bootswatch, which will be a sad state of affairs both because of the duplication and because I see no drawback in my version. 3. The current maintainer (I think there isn't one just right now) accepts my version. Note: this is not a simple PR because of some git tagging & branching the automation depends on. 4. I egoistically focus on my own specific need (yeti theme) and forget about this. What do you prefer?
1.0
Bootswatch themes repo and maintenance - I've updated my version of bootswatch mkdocs themes to the latest mkdocs theme and bootswatch (upstream) themes. Now, I know there is the mkdocs/mkdocs-bootswatch repository but it's outdated both regarding mkdocs and regarding bootswatch. I've [automated (as much as possible)](https://github.com/mkdocs/mkdocs-bootswatch/issues/21) the process of building the entire familiy of themes from the upstream bootswatch themes and w.r.t. the latest mkdocs theme. Thanks to this automation, I keep my themes very up-to-date and they fix most of the issues reported to mkdocs/mkdocs-bootswatch tracker. A time ago I offered to become the maintainer for mkdocs/mkdocs-bootswatch but get no answer. Despite the automation, keeping the themes up-to-date is no free lunch, so I need to take a decision between: 1. I become the maintainer of mkdocs/mkdocs-bootswatch. 2. I add an entry to the wiki and keep my own version of mkdocs-bootswatch, which will be a sad state of affairs both because of the duplication and because I see no drawback in my version. 3. The current maintainer (I think there isn't one just right now) accepts my version. Note: this is not a simple PR because of some git tagging & branching the automation depends on. 4. I egoistically focus on my own specific need (yeti theme) and forget about this. What do you prefer?
process
bootswatch themes repo and maintenance i ve updated my version of bootswatch mkdocs themes to the latest mkdocs theme and bootswatch upstream themes now i know there is the mkdocs mkdocs bootswatch repository but it s outdated both regarding mkdocs and regarding bootswatch i ve the process of building the entire familiy of themes from the upstream bootswatch themes and w r t the latest mkdocs theme thanks to this automation i keep my themes very up to date and they fix most of the issues reported to mkdocs mkdocs bootswatch tracker a time ago i offered to become the maintainer for mkdocs mkdocs bootswatch but get no answer despite the automation keeping the themes up to date is no free lunch so i need to take a decision between i become the maintainer of mkdocs mkdocs bootswatch i add an entry to the wiki and keep my own version of mkdocs bootswatch which will be a sad state of affairs both because of the duplication and because i see no drawback in my version the current maintainer i think there isn t one just right now accepts my version note this is not a simple pr because of some git tagging branching the automation depends on i egoistically focus on my own specific need yeti theme and forget about this what do you prefer
1
21,339
29,058,587,016
IssuesEvent
2023-05-15 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Mon, 15 May 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### HFLIC: Human Friendly Perceptual Learned Image Compression with Reinforced Transform - **Authors:** Peirong Ning, Wei Jiang, Ronggang Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2305.07519 - **Pdf link:** https://arxiv.org/pdf/2305.07519 - **Abstract** In recent years, there has been rapid development in learned image compression techniques that prioritize ratedistortion-perceptual compression, preserving fine details even at lower bit-rates. However, current learning-based image compression methods often sacrifice human-friendly compression and require long decoding times. In this paper, we propose enhancements to the backbone network and loss function of existing image compression model, focusing on improving human perception and efficiency. Our proposed approach achieves competitive subjective results compared to state-of-the-art end-to-end learned image compression methods and classic methods, while requiring less decoding time and offering human-friendly compression. Through empirical evaluation, we demonstrate the effectiveness of our proposed method in achieving outstanding performance, with more than 25% bit-rate saving at the same subjective quality. 
## Keyword: RAW ### A Survey on Deep Learning-Based Monocular Spacecraft Pose Estimation: Current State, Limitations and Prospects - **Authors:** Leo Pauly, Wassim Rharbaoui, Carl Shneider, Arunkumar Rathinam, Vincent Gaudilliere, Djamila Aouada - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2305.07348 - **Pdf link:** https://arxiv.org/pdf/2305.07348 - **Abstract** Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling the deployment of automatic vision-based systems in orbit, with applications ranging from on-orbit servicing to space debris removal. Following the general trend in computer vision, more and more works have been focusing on leveraging Deep Learning (DL) methods to address this problem. However and despite promising research-stage results, major challenges preventing the use of such methods in real-life missions still stand in the way. In particular, the deployment of such computation-intensive algorithms is still under-investigated, while the performance drop when training on synthetic and testing on real images remains to mitigate. The primary goal of this survey is to describe the current DL-based methods for spacecraft pose estimation in a comprehensive manner. The secondary goal is to help define the limitations towards the effective deployment of DL-based spacecraft pose estimation solutions for reliable autonomous vision-based applications. To this end, the survey first summarises the existing algorithms according to two approaches: hybrid modular pipelines and direct end-to-end regression methods. A comparison of algorithms is presented not only in terms of pose accuracy but also with a focus on network architectures and models' sizes keeping potential deployment in mind. Then, current monocular spacecraft pose estimation datasets used to train and test these methods are discussed. 
The data generation methods: simulators and testbeds, the domain gap and the performance drop between synthetically generated and lab/space collected images and the potential solutions are also discussed. Finally, the paper presents open research questions and future directions in the field, drawing parallels with other computer vision applications. ### Visual Information Extraction in the Wild: Practical Dataset and End-to-end Solution - **Authors:** Jianfeng Kuang, Wei Hua, Dingkang Liang, Mingkun Yang, Deqiang Jiang, Bo Ren, Yu Zhou, Xiang Bai - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07498 - **Pdf link:** https://arxiv.org/pdf/2305.07498 - **Abstract** Visual information extraction (VIE), which aims to simultaneously perform OCR and information extraction in a unified framework, has drawn increasing attention due to its essential role in various applications like understanding receipts, goods, and traffic signs. However, as existing benchmark datasets for VIE mainly consist of document images without the adequate diversity of layout structures, background disturbs, and entity categories, they cannot fully reveal the challenges of real-world applications. In this paper, we propose a large-scale dataset consisting of camera images for VIE, which contains not only the larger variance of layout, backgrounds, and fonts but also much more types of entities. Besides, we propose a novel framework for end-to-end VIE that combines the stages of OCR and information extraction in an end-to-end learning fashion. Different from the previous end-to-end approaches that directly adopt OCR features as the input of an information extraction module, we propose to use contrastive learning to narrow the semantic gap caused by the difference between the tasks of OCR and information extraction. 
We evaluate the existing end-to-end methods for VIE on the proposed dataset and observe that the performance of these methods has a distinguishable drop from SROIE (a widely used English dataset) to our proposed dataset due to the larger variance of layout and entities. These results demonstrate our dataset is more practical for promoting advanced VIE algorithms. In addition, experiments demonstrate that the proposed VIE method consistently achieves the obvious performance gains on the proposed and SROIE datasets. ### BlendFields: Few-Shot Example-Driven Facial Modeling - **Authors:** Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2305.07514 - **Pdf link:** https://arxiv.org/pdf/2305.07514 - **Abstract** Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models that cannot represent fine-grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces. 
### A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment - **Authors:** Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed E. Hussein, Wael AbdAlmgaeed - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07648 - **Pdf link:** https://arxiv.org/pdf/2305.07648 - **Abstract** Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets. ## Keyword: raw image ### A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment - **Authors:** Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed E. 
Hussein, Wael AbdAlmgaeed - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07648 - **Pdf link:** https://arxiv.org/pdf/2305.07648 - **Abstract** Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets.
2.0
New submissions for Mon, 15 May 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### HFLIC: Human Friendly Perceptual Learned Image Compression with Reinforced Transform - **Authors:** Peirong Ning, Wei Jiang, Ronggang Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2305.07519 - **Pdf link:** https://arxiv.org/pdf/2305.07519 - **Abstract** In recent years, there has been rapid development in learned image compression techniques that prioritize ratedistortion-perceptual compression, preserving fine details even at lower bit-rates. However, current learning-based image compression methods often sacrifice human-friendly compression and require long decoding times. In this paper, we propose enhancements to the backbone network and loss function of existing image compression model, focusing on improving human perception and efficiency. Our proposed approach achieves competitive subjective results compared to state-of-the-art end-to-end learned image compression methods and classic methods, while requiring less decoding time and offering human-friendly compression. Through empirical evaluation, we demonstrate the effectiveness of our proposed method in achieving outstanding performance, with more than 25% bit-rate saving at the same subjective quality. 
## Keyword: RAW ### A Survey on Deep Learning-Based Monocular Spacecraft Pose Estimation: Current State, Limitations and Prospects - **Authors:** Leo Pauly, Wassim Rharbaoui, Carl Shneider, Arunkumar Rathinam, Vincent Gaudilliere, Djamila Aouada - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2305.07348 - **Pdf link:** https://arxiv.org/pdf/2305.07348 - **Abstract** Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling the deployment of automatic vision-based systems in orbit, with applications ranging from on-orbit servicing to space debris removal. Following the general trend in computer vision, more and more works have been focusing on leveraging Deep Learning (DL) methods to address this problem. However and despite promising research-stage results, major challenges preventing the use of such methods in real-life missions still stand in the way. In particular, the deployment of such computation-intensive algorithms is still under-investigated, while the performance drop when training on synthetic and testing on real images remains to mitigate. The primary goal of this survey is to describe the current DL-based methods for spacecraft pose estimation in a comprehensive manner. The secondary goal is to help define the limitations towards the effective deployment of DL-based spacecraft pose estimation solutions for reliable autonomous vision-based applications. To this end, the survey first summarises the existing algorithms according to two approaches: hybrid modular pipelines and direct end-to-end regression methods. A comparison of algorithms is presented not only in terms of pose accuracy but also with a focus on network architectures and models' sizes keeping potential deployment in mind. Then, current monocular spacecraft pose estimation datasets used to train and test these methods are discussed. 
The data generation methods: simulators and testbeds, the domain gap and the performance drop between synthetically generated and lab/space collected images and the potential solutions are also discussed. Finally, the paper presents open research questions and future directions in the field, drawing parallels with other computer vision applications. ### Visual Information Extraction in the Wild: Practical Dataset and End-to-end Solution - **Authors:** Jianfeng Kuang, Wei Hua, Dingkang Liang, Mingkun Yang, Deqiang Jiang, Bo Ren, Yu Zhou, Xiang Bai - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07498 - **Pdf link:** https://arxiv.org/pdf/2305.07498 - **Abstract** Visual information extraction (VIE), which aims to simultaneously perform OCR and information extraction in a unified framework, has drawn increasing attention due to its essential role in various applications like understanding receipts, goods, and traffic signs. However, as existing benchmark datasets for VIE mainly consist of document images without the adequate diversity of layout structures, background disturbs, and entity categories, they cannot fully reveal the challenges of real-world applications. In this paper, we propose a large-scale dataset consisting of camera images for VIE, which contains not only the larger variance of layout, backgrounds, and fonts but also much more types of entities. Besides, we propose a novel framework for end-to-end VIE that combines the stages of OCR and information extraction in an end-to-end learning fashion. Different from the previous end-to-end approaches that directly adopt OCR features as the input of an information extraction module, we propose to use contrastive learning to narrow the semantic gap caused by the difference between the tasks of OCR and information extraction. 
We evaluate the existing end-to-end methods for VIE on the proposed dataset and observe that the performance of these methods has a distinguishable drop from SROIE (a widely used English dataset) to our proposed dataset due to the larger variance of layout and entities. These results demonstrate our dataset is more practical for promoting advanced VIE algorithms. In addition, experiments demonstrate that the proposed VIE method consistently achieves the obvious performance gains on the proposed and SROIE datasets. ### BlendFields: Few-Shot Example-Driven Facial Modeling - **Authors:** Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2305.07514 - **Pdf link:** https://arxiv.org/pdf/2305.07514 - **Abstract** Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models that cannot represent fine-grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces. 
### A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment - **Authors:** Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed E. Hussein, Wael AbdAlmgaeed - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07648 - **Pdf link:** https://arxiv.org/pdf/2305.07648 - **Abstract** Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets. ## Keyword: raw image ### A Critical View Of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment - **Authors:** Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed E. 
Hussein, Wael AbdAlmgaeed - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2305.07648 - **Pdf link:** https://arxiv.org/pdf/2305.07648 - **Abstract** Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets.
process
new submissions for mon may keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression hflic human friendly perceptual learned image compression with reinforced transform authors peirong ning wei jiang ronggang wang subjects computer vision and pattern recognition cs cv multimedia cs mm image and video processing eess iv arxiv link pdf link abstract in recent years there has been rapid development in learned image compression techniques that prioritize ratedistortion perceptual compression preserving fine details even at lower bit rates however current learning based image compression methods often sacrifice human friendly compression and require long decoding times in this paper we propose enhancements to the backbone network and loss function of existing image compression model focusing on improving human perception and efficiency our proposed approach achieves competitive subjective results compared to state of the art end to end learned image compression methods and classic methods while requiring less decoding time and offering human friendly compression through empirical evaluation we demonstrate the effectiveness of our proposed method in achieving outstanding performance with more than bit rate saving at the same subjective quality keyword raw a survey on deep learning based monocular spacecraft pose estimation current state limitations and prospects authors leo pauly wassim rharbaoui carl shneider arunkumar rathinam vincent gaudilliere djamila aouada subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract estimating the pose of an uncooperative spacecraft is an important computer 
vision problem for enabling the deployment of automatic vision based systems in orbit with applications ranging from on orbit servicing to space debris removal following the general trend in computer vision more and more works have been focusing on leveraging deep learning dl methods to address this problem however and despite promising research stage results major challenges preventing the use of such methods in real life missions still stand in the way in particular the deployment of such computation intensive algorithms is still under investigated while the performance drop when training on synthetic and testing on real images remains to mitigate the primary goal of this survey is to describe the current dl based methods for spacecraft pose estimation in a comprehensive manner the secondary goal is to help define the limitations towards the effective deployment of dl based spacecraft pose estimation solutions for reliable autonomous vision based applications to this end the survey first summarises the existing algorithms according to two approaches hybrid modular pipelines and direct end to end regression methods a comparison of algorithms is presented not only in terms of pose accuracy but also with a focus on network architectures and models sizes keeping potential deployment in mind then current monocular spacecraft pose estimation datasets used to train and test these methods are discussed the data generation methods simulators and testbeds the domain gap and the performance drop between synthetically generated and lab space collected images and the potential solutions are also discussed finally the paper presents open research questions and future directions in the field drawing parallels with other computer vision applications visual information extraction in the wild practical dataset and end to end solution authors jianfeng kuang wei hua dingkang liang mingkun yang deqiang jiang bo ren yu zhou xiang bai subjects computer vision and pattern recognition cs 
cv arxiv link pdf link abstract visual information extraction vie which aims to simultaneously perform ocr and information extraction in a unified framework has drawn increasing attention due to its essential role in various applications like understanding receipts goods and traffic signs however as existing benchmark datasets for vie mainly consist of document images without the adequate diversity of layout structures background disturbs and entity categories they cannot fully reveal the challenges of real world applications in this paper we propose a large scale dataset consisting of camera images for vie which contains not only the larger variance of layout backgrounds and fonts but also much more types of entities besides we propose a novel framework for end to end vie that combines the stages of ocr and information extraction in an end to end learning fashion different from the previous end to end approaches that directly adopt ocr features as the input of an information extraction module we propose to use contrastive learning to narrow the semantic gap caused by the difference between the tasks of ocr and information extraction we evaluate the existing end to end methods for vie on the proposed dataset and observe that the performance of these methods has a distinguishable drop from sroie a widely used english dataset to our proposed dataset due to the larger variance of layout and entities these results demonstrate our dataset is more practical for promoting advanced vie algorithms in addition experiments demonstrate that the proposed vie method consistently achieves the obvious performance gains on the proposed and sroie datasets blendfields few shot example driven facial modeling authors kacper kania stephan j garbin andrea tagliasacchi virginia estellers kwang moo yi julien valentin tomasz trzciński marek kowalski subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract generating faithful visualizations of human 
faces requires capturing both coarse and fine level details of the face geometry and appearance existing methods are either data driven requiring an extensive corpus of data not publicly accessible to the research community or fail to capture fine details because they rely on geometric face models that cannot represent fine grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry we introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques unseen expressions are modeled by blending appearance from a sparse set of extreme poses this blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time we show that our method generalizes to unseen expressions adding fine grained effects on top of smooth volumetric deformations of a face and demonstrate how it generalizes beyond faces a critical view of vision based long term dynamics prediction under environment misalignment authors hanchen xie jiageng zhu mahyar khayatkhoei jiazhi li mohamed e hussein wael abdalmgaeed subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract dynamics prediction which is the problem of predicting future states of scene objects based on current and prior states is drawing increasing attention as an instance of learning physics to solve this problem region proposal convolutional interaction network rpcin a vision based model was proposed and achieved state of the art performance in long term prediction rpcin only takes raw images and simple object descriptions such as the bounding box and segmentation mask of each object as input however despite its success the model s capability can be compromised under conditions of environment misalignment in this paper we investigate two challenging conditions for environment misalignment cross domain 
and cross context by proposing four datasets that are designed for these challenges simb border simb split blenb border and blenb split the datasets cover two domains and two contexts using rpcin as a probe experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision based long term dynamics prediction model furthermore we propose a promising direction to mitigate the cross domain challenge and provide concrete evidence supporting such a direction which provides dramatic alleviation of the challenge on the proposed datasets keyword raw image a critical view of vision based long term dynamics prediction under environment misalignment authors hanchen xie jiageng zhu mahyar khayatkhoei jiazhi li mohamed e hussein wael abdalmgaeed subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract dynamics prediction which is the problem of predicting future states of scene objects based on current and prior states is drawing increasing attention as an instance of learning physics to solve this problem region proposal convolutional interaction network rpcin a vision based model was proposed and achieved state of the art performance in long term prediction rpcin only takes raw images and simple object descriptions such as the bounding box and segmentation mask of each object as input however despite its success the model s capability can be compromised under conditions of environment misalignment in this paper we investigate two challenging conditions for environment misalignment cross domain and cross context by proposing four datasets that are designed for these challenges simb border simb split blenb border and blenb split the datasets cover two domains and two contexts using rpcin as a probe experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision based long term dynamics prediction model furthermore we propose a promising direction to mitigate the 
cross domain challenge and provide concrete evidence supporting such a direction which provides dramatic alleviation of the challenge on the proposed datasets
1
16,739
21,900,209,676
IssuesEvent
2022-05-20 12:43:49
camunda/zeebe-process-test
https://api.github.com/repos/camunda/zeebe-process-test
opened
Make the hasPassed() assertion more informative
kind/feature team/process-automation
**Description** running this assertion: ``` assertThat(processInstanceEvent) .hasPassedElement("StartEvent_"); ``` throws an assertion error like ``` java.lang.AssertionError: Expected element with id StartEvent_ to be passed 1 times, but was 0 at io.camunda.zeebe.process.test.assertions.ProcessInstanceAssert.hasPassedElement(ProcessInstanceAssert.java:226) at io.camunda.zeebe.process.test.assertions.ProcessInstanceAssert.hasPassedElement(ProcessInstanceAssert.java:189) at com.camunda.consulting.ExampleProcessUnitTest.testHappyPath(ExampleProcessUnitTest.java:35) ... ``` A message like ``` Could not find StartEvent_ in the list of passed elements. Instead the process has passed [StartEvent_1, Activity_2] ``` is more helpful. camunda-bpm-assert shows this error message: https://github.com/camunda/camunda-bpm-platform/blob/master/test-utils/assert/core/src/main/java/org/camunda/bpm/engine/test/assertions/bpmn/ProcessInstanceAssert.java#L287
1.0
Make the hasPassed() assertion more informative - **Description** running this assertion: ``` assertThat(processInstanceEvent) .hasPassedElement("StartEvent_"); ``` throws an assertion error like ``` java.lang.AssertionError: Expected element with id StartEvent_ to be passed 1 times, but was 0 at io.camunda.zeebe.process.test.assertions.ProcessInstanceAssert.hasPassedElement(ProcessInstanceAssert.java:226) at io.camunda.zeebe.process.test.assertions.ProcessInstanceAssert.hasPassedElement(ProcessInstanceAssert.java:189) at com.camunda.consulting.ExampleProcessUnitTest.testHappyPath(ExampleProcessUnitTest.java:35) ... ``` A message like ``` Could not find StartEvent_ in the list of passed elements. Instead the process has passed [StartEvent_1, Activity_2] ``` is more helpful. camunda-bpm-assert shows this error message: https://github.com/camunda/camunda-bpm-platform/blob/master/test-utils/assert/core/src/main/java/org/camunda/bpm/engine/test/assertions/bpmn/ProcessInstanceAssert.java#L287
process
make the haspassed assertion more informative description running this assertion assertthat processinstanceevent haspassedelement startevent throws an assertion error like java lang assertionerror expected element with id startevent to be passed times but was at io camunda zeebe process test assertions processinstanceassert haspassedelement processinstanceassert java at io camunda zeebe process test assertions processinstanceassert haspassedelement processinstanceassert java at com camunda consulting exampleprocessunittest testhappypath exampleprocessunittest java a message like could not find startevent in the list of passed elements instead the process has passed is more helpful camunda bpm assert shows this error message
1
11,487
14,358,521,836
IssuesEvent
2020-11-30 14:31:43
panther-labs/panther
https://api.github.com/repos/panther-labs/panther
closed
Syntax Errors Not Showing as Rule Errors
bug p0 team:data processing
### Describe the bug When a rule has a syntax error, it does not generate a Rule Error ### Steps to reproduce 1. Write a new rule 2. Add a return statement such as `return foobar = 'test'` 3. CloudWatch alarms fire, but no rule error alert is generated: ``` [ERROR] 2020-11-25T20:26:22.625Z 37ebe67d-1e18-580d-913a-be45de030a8f Failed to import rule Rule.ID Error: [invalid syntax (Rule.ID.py, line 2)] ``` ### Expected behavior A rule error alert should be generated about the Syntax Error ### Environment Panther Enterprise v1.13.0-RC ### Additional context None
1.0
Syntax Errors Not Showing as Rule Errors - ### Describe the bug When a rule has a syntax error, it does not generate a Rule Error ### Steps to reproduce 1. Write a new rule 2. Add a return statement such as `return foobar = 'test'` 3. CloudWatch alarms fire, but no rule error alert is generated: ``` [ERROR] 2020-11-25T20:26:22.625Z 37ebe67d-1e18-580d-913a-be45de030a8f Failed to import rule Rule.ID Error: [invalid syntax (Rule.ID.py, line 2)] ``` ### Expected behavior A rule error alert should be generated about the Syntax Error ### Environment Panther Enterprise v1.13.0-RC ### Additional context None
process
syntax errors not showing as rule errors describe the bug when a rule has a syntax error it does not generate a rule error steps to reproduce write a new rule add a return statement such as return foobar test cloudwatch alarms fire but no rule error alert is generated failed to import rule rule id error expected behavior a rule error alert should be generated about the syntax error environment panther enterprise rc additional context none
1
544,288
15,891,941,567
IssuesEvent
2021-04-10 21:26:30
thenewboston-developers/Website
https://api.github.com/repos/thenewboston-developers/Website
closed
Update Community Team's "About the team" card on website
priority.Low
You need to update Community Team's "About the team" card, on website. Here is the link to that webpage: https://thenewboston.com/teams/Community/Overview Currently in "About the team" card, we have a link to: https://github.com/thenewboston-developers/thenewboston-python We want it to be changed to: https://github.com/thenewboston-developers/Management/projects/7 with "thenewboston-Management/Community Tasks" as the reference name. If you have any questions in this issue, kindly let me know in the comment or via discord inbox.
1.0
Update Community Team's "About the team" card on website - You need to update Community Team's "About the team" card, on website. Here is the link to that webpage: https://thenewboston.com/teams/Community/Overview Currently in "About the team" card, we have a link to: https://github.com/thenewboston-developers/thenewboston-python We want it to be changed to: https://github.com/thenewboston-developers/Management/projects/7 with "thenewboston-Management/Community Tasks" as the reference name. If you have any questions in this issue, kindly let me know in the comment or via discord inbox.
non_process
update community team s about the team card on website you need to update community team s about the team card on website here is the link to that webpage currently in about the team card we have a link to we want it to be changed to with thenewboston management community tasks as the reference name if you have any questions in this issue kindly let me know in the comment or via discord inbox
0
22,342
31,018,899,966
IssuesEvent
2023-08-10 02:28:25
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
opened
[processor/k8sattributes] Refactor FieldExtractConfig
priority:p2 processor/k8sattributes
### Component(s) _No response_ ### What happened? There are at least two issues with the current FieldExtractConfig config interface: - https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/25128 - `tag_name` references the old "tag" concept inherited from OpenCensus, it should reference "attribute" or "resource attribute" instead. Overall, the config should be aligned with all other filtering interfaces that we have in other parts of the Collector. ### Collector version - ### Environment information ## Environment OS: (e.g., "Ubuntu 20.04") Compiler(if manually compiled): (e.g., "go 14.2") ### OpenTelemetry Collector configuration _No response_ ### Log output _No response_ ### Additional context _No response_
1.0
[processor/k8sattributes] Refactor FieldExtractConfig - ### Component(s) _No response_ ### What happened? There are at least two issues with the current FieldExtractConfig config interface: - https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/25128 - `tag_name` references the old "tag" concept inherited from OpenCensus, it should reference "attribute" or "resource attribute" instead. Overall, the config should be aligned with all other filtering interfaces that we have in other parts of the Collector. ### Collector version - ### Environment information ## Environment OS: (e.g., "Ubuntu 20.04") Compiler(if manually compiled): (e.g., "go 14.2") ### OpenTelemetry Collector configuration _No response_ ### Log output _No response_ ### Additional context _No response_
process
refactor fieldextractconfig component s no response what happened there are at least two issues with the current fieldextractconfig config interface tag name references the old tag concept inherited from opencensus it should reference attribute or resource attribute instead overall the config should be aligned with all other filtering interfaces that we have in other parts of the collector collector version environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration no response log output no response additional context no response
1
278,070
21,058,035,772
IssuesEvent
2022-04-01 06:39:08
python-pillow/Pillow
https://api.github.com/repos/python-pillow/Pillow
closed
JPEG2000 irreversible option misleading
Documentation JPEG
The [documentation](https://pillow.readthedocs.io/en/latest/handbook/image-file-formats.html#jpeg-2000) for the J2K plugin says: > **irreversible** If `True`, use the lossy Irreversible Color Transformation followed by DWT 9-7. Defaults to `False`, which means to use the Reversible Color Transformation with DWT 5-3. I think the "color transformation" here is referring to the multiple component transformation (MCT), however looking at the source code, MCT isn't being applied, so no color transform is occurring (as I understand it `irreversible` in openjpeg only controls which DWT is used). This is visible in the output codestream: ```python from io import BytesIO import numpy as np from PIL import Image from struct import unpack # https://github.com/python-pillow/Pillow/blob/master/Tests/images/xmp_test.jpg im = Image.open("xmp_test.jpg") assert im.mode == "RGB" ref = np.asarray(im) b = BytesIO() b.name = "foo.j2k" # workaround so no JP2 im.save(b, format="JPEG2000", irreversible=False) # Parse J2K codestream codestream = b.getvalue() print(f"len: {len(codestream)}") if codestream[0:2] == b'\xff\x4f': if codestream[2:4] == b'\xff\x51': lsiz = unpack('>H', codestream[4:6])[0] ssiz = codestream[42] if ssiz & 0x80: print(f"precision: {(ssiz & 0x7F) + 1}") print("signed: true") else: print(f"precision: {ssiz + 1}") print("signed: false") cod_offset = lsiz + 4 if codestream[cod_offset:cod_offset + 2] == b'\xff\x52': # 0 for none, 1 for applied to components 0, 1, 2 print(f"MCT: {codestream[cod_offset + 8]}") # 0 for DWT 9-7 (irreversible), 1 for DWT 5-3 (reversible) print(f"DWT: {codestream[cod_offset + 13]}") im = Image.open(b) assert np.array_equal(ref, np.asarray(im)) ``` Produces: ``` len: 3506536 precision: 8 signed: false MCT: 0 DWT: 1 ``` For comparison, with Jpeg2KEncode.c modified to use MCT: ``` len: 2682646 precision: 8 signed: false MCT: 1 DWT: 1 ``` With the transform into YCbCr the compression ratio is higher, as expected, while still being reversible. 
### What are your OS, Python and Pillow versions? * OS: Ubuntu 20.04 * Python: 3.9 * Pillow: current master d393cfb
1.0
JPEG2000 irreversible option misleading - The [documentation](https://pillow.readthedocs.io/en/latest/handbook/image-file-formats.html#jpeg-2000) for the J2K plugin says: > **irreversible** If `True`, use the lossy Irreversible Color Transformation followed by DWT 9-7. Defaults to `False`, which means to use the Reversible Color Transformation with DWT 5-3. I think the "color transformation" here is referring to the multiple component transformation (MCT), however looking at the source code, MCT isn't being applied, so no color transform is occurring (as I understand it `irreversible` in openjpeg only controls which DWT is used). This is visible in the output codestream: ```python from io import BytesIO import numpy as np from PIL import Image from struct import unpack # https://github.com/python-pillow/Pillow/blob/master/Tests/images/xmp_test.jpg im = Image.open("xmp_test.jpg") assert im.mode == "RGB" ref = np.asarray(im) b = BytesIO() b.name = "foo.j2k" # workaround so no JP2 im.save(b, format="JPEG2000", irreversible=False) # Parse J2K codestream codestream = b.getvalue() print(f"len: {len(codestream)}") if codestream[0:2] == b'\xff\x4f': if codestream[2:4] == b'\xff\x51': lsiz = unpack('>H', codestream[4:6])[0] ssiz = codestream[42] if ssiz & 0x80: print(f"precision: {(ssiz & 0x7F) + 1}") print("signed: true") else: print(f"precision: {ssiz + 1}") print("signed: false") cod_offset = lsiz + 4 if codestream[cod_offset:cod_offset + 2] == b'\xff\x52': # 0 for none, 1 for applied to components 0, 1, 2 print(f"MCT: {codestream[cod_offset + 8]}") # 0 for DWT 9-7 (irreversible), 1 for DWT 5-3 (reversible) print(f"DWT: {codestream[cod_offset + 13]}") im = Image.open(b) assert np.array_equal(ref, np.asarray(im)) ``` Produces: ``` len: 3506536 precision: 8 signed: false MCT: 0 DWT: 1 ``` For comparison, with Jpeg2KEncode.c modified to use MCT: ``` len: 2682646 precision: 8 signed: false MCT: 1 DWT: 1 ``` With the transform into YCbCr the compression ratio is higher, as 
expected, while still being reversible. ### What are your OS, Python and Pillow versions? * OS: Ubuntu 20.04 * Python: 3.9 * Pillow: current master d393cfb
non_process
irreversible option misleading the for the plugin says irreversible if true use the lossy irreversible color transformation followed by dwt defaults to false which means to use the reversible color transformation with dwt i think the color transformation here is referring to the multiple component transformation mct however looking at the source code mct isn t being applied so no color transform is occurring as i understand it irreversible in openjpeg only controls which dwt is used this is visible in the output codestream python from io import bytesio import numpy as np from pil import image from struct import unpack im image open xmp test jpg assert im mode rgb ref np asarray im b bytesio b name foo workaround so no im save b format irreversible false parse codestream codestream b getvalue print f len len codestream if codestream b xff if codestream b xff lsiz unpack h codestream ssiz codestream if ssiz print f precision ssiz print signed true else print f precision ssiz print signed false cod offset lsiz if codestream b xff for none for applied to components print f mct codestream for dwt irreversible for dwt reversible print f dwt codestream im image open b assert np array equal ref np asarray im produces len precision signed false mct dwt for comparison with c modified to use mct len precision signed false mct dwt with the transform into ycbcr the compression ratio is higher as expected while still being reversible what are your os python and pillow versions os ubuntu python pillow current master
0
15,240
19,178,875,190
IssuesEvent
2021-12-04 03:09:55
ooi-data/CE01ISSP-SP001-08-FLORTJ000-recovered_cspp-flort_sample
https://api.github.com/repos/ooi-data/CE01ISSP-SP001-08-FLORTJ000-recovered_cspp-flort_sample
opened
🛑 Processing failed: TypeError
process
## Overview `TypeError` found in `processing_task` task during run ended on 2021-12-04T03:09:54.779447. ## Details Flow name: `CE01ISSP-SP001-08-FLORTJ000-recovered_cspp-flort_sample` Task name: `processing_task` Error type: `TypeError` Error message: int() argument must be a string, a bytes-like object or a number, not 'NoneType' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 146, in process_dataset append_to_zarr(mod_ds, store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 342, in append_to_zarr mod_ds = _prepare_ds_to_append(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 142, in _prepare_ds_to_append new_arr.fill( TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' ``` </details>
1.0
🛑 Processing failed: TypeError - ## Overview `TypeError` found in `processing_task` task during run ended on 2021-12-04T03:09:54.779447. ## Details Flow name: `CE01ISSP-SP001-08-FLORTJ000-recovered_cspp-flort_sample` Task name: `processing_task` Error type: `TypeError` Error message: int() argument must be a string, a bytes-like object or a number, not 'NoneType' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 146, in process_dataset append_to_zarr(mod_ds, store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 342, in append_to_zarr mod_ds = _prepare_ds_to_append(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 142, in _prepare_ds_to_append new_arr.fill( TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' ``` </details>
process
🛑 processing failed typeerror overview typeerror found in processing task task during run ended on details flow name recovered cspp flort sample task name processing task error type typeerror error message int argument must be a string a bytes like object or a number not nonetype traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing process dataset file srv conda envs notebook lib site packages ooi harvester processor init py line in process dataset append to zarr mod ds store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr mod ds prepare ds to append store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in prepare ds to append new arr fill typeerror int argument must be a string a bytes like object or a number not nonetype
1
119,451
4,770,468,093
IssuesEvent
2016-10-26 15:21:02
USGCRP/gcis
https://api.github.com/repos/USGCRP/gcis
opened
Replace 'international' with the relevant org type
context QA priority medium type content type technical
Currently 'international' is a type of organization. Now that we have added international flag feature ( #448 ), there is no reason to have this type listed as an option under 'Organization type' drop down list. To replace this type with the relevant org type : - [ ] Generate a list of organizations that are currently listed as Organization type = 'International' ( @amruelama ) - [ ] Perform QA to replace all these organizations' type with relevant organization type ( @Zullmira ) - [ ] Delete 'international' as a type under 'Organization type' drop down list ( @lomky )
1.0
Replace 'international' with the relevant org type - Currently 'international' is a type of organization. Now that we have added international flag feature ( #448 ), there is no reason to have this type listed as an option under 'Organization type' drop down list. To replace this type with the relevant org type : - [ ] Generate a list of organizations that are currently listed as Organization type = 'International' ( @amruelama ) - [ ] Perform QA to replace all these organizations' type with relevant organization type ( @Zullmira ) - [ ] Delete 'international' as a type under 'Organization type' drop down list ( @lomky )
non_process
replace international with the relevant org type currently international is a type of organization now that we have added international flag feature there is no reason to have this type listed as an option under organization type drop down list to replace this type with the relevant org type generate a list of organizations that are currently listed as organization type international amruelama perform qa to replace all these organizations type with relevant organization type zullmira delete international as a type under organization type drop down list lomky
0
7,979
11,167,808,681
IssuesEvent
2019-12-27 18:46:28
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Bug: Whoops page flashing
Apply Process Bug
Environment: Test Issue: Whoops page is flashing up in two places during the application process Steps to reproduce: 1) apply for an internship 2) click next steps - pulling data modal displays and then "Whoops page" for a few seconds 3) Select 3 internships and click next...whoops page displays again before taking user to education
1.0
Bug: Whoops page flashing - Environment: Test Issue: Whoops page is flashing up in two places during the application process Steps to reproduce: 1) apply for an internship 2) click next steps - pulling data modal displays and then "Whoops page" for a few seconds 3) Select 3 internships and click next...whoops page displays again before taking user to education
process
bug whoops page flashing environment test issue whoops page is flashing up in two places during the application process steps to reproduce apply for an internship click next steps pulling data modal displays and then whoops page for a few seconds select internships and click next whoops page displays again before taking user to education
1
41,094
10,617,039,182
IssuesEvent
2019-10-12 16:11:16
highlightjs/highlight.js
https://api.github.com/repos/highlightjs/highlight.js
closed
Getting r instead of relevance
npm/packaging/build
The docs describe the some functions return an object with the attribute `relevance` as in http://highlightjs.readthedocs.io/en/latest/api.html#highlight-name-value-ignore-illegals-continuation But using the last version of highlight.js seems to return `r` instead of `relevance` http://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.4.0/highlight.min.js If you search the minified file you can verify this. You will find `language:`, `value:` and `top:` but you won't find `relevance:`
1.0
Getting r instead of relevance - The docs describe the some functions return an object with the attribute `relevance` as in http://highlightjs.readthedocs.io/en/latest/api.html#highlight-name-value-ignore-illegals-continuation But using the last version of highlight.js seems to return `r` instead of `relevance` http://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.4.0/highlight.min.js If you search the minified file you can verify this. You will find `language:`, `value:` and `top:` but you won't find `relevance:`
non_process
getting r instead of relevance the docs describe the some functions return an object with the attribute relevance as in but using the last version of highlight js seems to return r instead of relevance if you search the minified file you can verify this you will find language value and top but you won t find relevance
0
3,817
6,800,668,630
IssuesEvent
2017-11-02 14:39:54
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
opened
Add an icon for Processing algorithm
Processing help question
Some of the Processing algorithms have kept their initial icon from GDALTools or fTools period. Do we show these icons in algorithms' help? Note that this could be mixed with solving #2215 by adding path like :menuselection:`Vector --> Section -->|icon| alg_name` (this spelling won't work but you get the idea ;) )
1.0
Add an icon for Processing algorithm - Some of the Processing algorithms have kept their initial icon from GDALTools or fTools period. Do we show these icons in algorithms' help? Note that this could be mixed with solving #2215 by adding path like :menuselection:`Vector --> Section -->|icon| alg_name` (this spelling won't work but you get the idea ;) )
process
add an icon for processing algorithm some of the processing algorithms have kept their initial icon from gdaltools or ftools period do we show these icons in algorithms help note that this could be mixed with solving by adding path like menuselection vector section icon alg name this spelling won t work but you get the idea
1
7,909
19,978,585,647
IssuesEvent
2022-01-29 14:14:32
paperclip-ui/paperclip
https://api.github.com/repos/paperclip-ui/paperclip
closed
Use CRDTs for document syncing
status: completed estimate: 1 month priority: high effort: hard architecture
Kinda necessary for proper text syncing between VS Code and the designer
1.0
Use CRDTs for document syncing - Kinda necessary for proper text syncing between VS Code and the designer
non_process
use crdts for document syncing kinda necessary for proper text syncing between vs code and the designer
0
378,476
11,202,975,683
IssuesEvent
2020-01-04 16:32:16
Nicklason/tf2-automatic
https://api.github.com/repos/Nicklason/tf2-automatic
closed
Feature Request: Adding timestamps back to console/terminal
enhancement low priority
I like seeing it on the terminal xd
1.0
Feature Request: Adding timestamps back to console/terminal - I like seeing it on the terminal xd
non_process
feature request adding timestamps back to console terminal i like seeing it on the terminal xd
0
9,803
12,814,803,060
IssuesEvent
2020-07-04 21:09:21
percybolmer/workflow
https://api.github.com/repos/percybolmer/workflow
closed
Remove Application and Workflow
enhancement processor
Should Applications and Workflow be removed? Or an Hidden implementation instead? It would be easier in the gui if there only was Processors to care for Processors would a reference ID to the Parent
1.0
Remove Application and Workflow - Should Applications and Workflow be removed? Or an Hidden implementation instead? It would be easier in the gui if there only was Processors to care for Processors would a reference ID to the Parent
process
remove application and workflow should applications and workflow be removed or an hidden implementation instead it would be easier in the gui if there only was processors to care for processors would a reference id to the parent
1
179,314
21,562,835,724
IssuesEvent
2022-05-01 12:26:57
anyulled/package-dependency
https://api.github.com/repos/anyulled/package-dependency
closed
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz - autoclosed
security vulnerability
## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /dependentApp/package.json</p> <p>Path to vulnerable library: /dependentApp/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - babel-core-6.26.3.tgz (Root Library) - babel-register-6.26.0.tgz - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/anyulled/package-dependency/commit/0f22de0986ee857eed06117e2eab2f58e4f0e1e3">0f22de0986ee857eed06117e2eab2f58e4f0e1e3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution (minimist): 0.2.1</p> <p>Direct dependency fix Resolution (babel-core): 7.0.0-alpha.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz - autoclosed - ## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /dependentApp/package.json</p> <p>Path to vulnerable library: /dependentApp/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - babel-core-6.26.3.tgz (Root Library) - babel-register-6.26.0.tgz - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/anyulled/package-dependency/commit/0f22de0986ee857eed06117e2eab2f58e4f0e1e3">0f22de0986ee857eed06117e2eab2f58e4f0e1e3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. 
<p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution (minimist): 0.2.1</p> <p>Direct dependency fix Resolution (babel-core): 7.0.0-alpha.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in minimist tgz autoclosed cve medium severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file dependentapp package json path to vulnerable library dependentapp node modules minimist package json dependency hierarchy babel core tgz root library babel register tgz mkdirp tgz x minimist tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist direct dependency fix resolution babel core alpha step up your open source security game with whitesource
0
11,106
13,942,799,142
IssuesEvent
2020-10-22 21:40:48
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
"Does not contain" and "Is not" filter also removes nulls
.Reproduced Priority:P2 Querying/GUI Querying/Processor Type:Bug
When applying a "does not contain" filter, it implicitly also filters away rows with null in that column. I think this is SQL behaviour, but not very nice UX.
1.0
"Does not contain" and "Is not" filter also removes nulls - When applying a "does not contain" filter, it implicitly also filters away rows with null in that column. I think this is SQL behaviour, but not very nice UX.
process
does not contain and is not filter also removes nulls when applying a does not contain filter it implicitly also filters away rows with null in that column i think this is sql behaviour but not very nice ux
1
15,811
20,013,175,684
IssuesEvent
2022-02-01 09:19:42
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Apache Log4j Security Vulnerabilities with version lesser than 2.15.0
Bug Process: Fixed
Apache Log4j Security Vulnerabilities with version lesser than 2.15.0. For further details please refer this [link ](https://logging.apache.org/log4j/2.x/security.html#CVE-2021-45046)
1.0
Apache Log4j Security Vulnerabilities with version lesser than 2.15.0 - Apache Log4j Security Vulnerabilities with version lesser than 2.15.0. For further details please refer this [link ](https://logging.apache.org/log4j/2.x/security.html#CVE-2021-45046)
process
apache security vulnerabilities with version lesser than apache security vulnerabilities with version lesser than for further details please refer this
1
16,771
21,946,770,004
IssuesEvent
2022-05-24 02:03:04
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
opened
Add Prometheus monitoring support
enhancement P1 process
### Problem The relay currently doesn't have monitoring support embedded into the API. This will make it challenging to assess and view health at scale. ### Solution Utilize [Prometheus](https://prometheus.io/docs/introduction/overview/) monitoring support to allow for great log value tracing capabilities. - Add a `/metrics` endpoint for aggregations to be polled from - Add npm [prom-client](https://github.com/siimon/prom-client) to allow logs aggregation - Capture request latency - Capture log levels - Capture response status ### Alternatives A non prometheus option e.g. swagger
1.0
Add Prometheus monitoring support - ### Problem The relay currently doesn't have monitoring support embedded into the API. This will make it challenging to assess and view health at scale. ### Solution Utilize [Prometheus](https://prometheus.io/docs/introduction/overview/) monitoring support to allow for great log value tracing capabilities. - Add a `/metrics` endpoint for aggregations to be polled from - Add npm [prom-client](https://github.com/siimon/prom-client) to allow logs aggregation - Capture request latency - Capture log levels - Capture response status ### Alternatives A non prometheus option e.g. swagger
process
add prometheus monitoring support problem the relay currently doesn t have monitoring support embedded into the api this will make it challenging to assess and view health at scale solution utilize monitoring support to allow for great log value tracing capabilities add a metrics endpoint for aggregations to be polled from add npm to allow logs aggregation capture request latency capture log levels capture response status alternatives a non prometheus option e g swagger
1
333,286
29,520,488,597
IssuesEvent
2023-06-05 00:57:50
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
reopened
DISABLED test_rmsprop (__main__.TestOptim)
triaged module: flaky-tests skipped module: dynamo
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_rmsprop&suite=TestOptim) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8754116789). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 failures and 1 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_rmsprop` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. cc @vincentqb @jbschlosser @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
1.0
DISABLED test_rmsprop (__main__.TestOptim) - Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_rmsprop&suite=TestOptim) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/8754116789). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 failures and 1 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT BE ALARMED IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_rmsprop` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. cc @vincentqb @jbschlosser @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
non_process
disabled test rmsprop main testoptim platforms dynamo this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not be alarmed if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test rmsprop there should be several instances run as flaky tests are rerun in ci from which you can study the logs cc vincentqb jbschlosser jansel mlazos soumith voznesenskym yanboliang penguinwu
0