Column summary (name, dtype, observed classes or value/length range):

- `Unnamed: 0` — int64, 0 to 832k
- `id` — float64, 2.49B to 32.1B
- `type` — string, 1 class
- `created_at` — string, length 19
- `repo` — string, length 5 to 112
- `repo_url` — string, length 34 to 141
- `action` — string, 3 classes
- `title` — string, length 1 to 1k
- `labels` — string, length 4 to 1.38k
- `body` — string, length 1 to 262k
- `index` — string, 16 classes
- `text_combine` — string, length 96 to 262k
- `label` — string, 2 classes
- `text` — string, length 96 to 252k
- `binary_label` — int64, 0 or 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611,838 | 18,982,438,631 | IssuesEvent | 2021-11-21 05:26:48 | phetsims/chipper | https://api.github.com/repos/phetsims/chipper | closed | Make sure all repos pass precommit hooks | priority:2-high dev:typescript | From https://github.com/phetsims/chipper/issues/1134 make sure all repos pass precommit hooks. I already fixed phet-io. Sun was ok. Others need to be checked. | 1.0 | Make sure all repos pass precommit hooks - From https://github.com/phetsims/chipper/issues/1134 make sure all repos pass precommit hooks. I already fixed phet-io. Sun was ok. Others need to be checked. | priority | make sure all repos pass precommit hooks from make sure all repos pass precommit hooks i already fixed phet io sun was ok others need to be checked | 1 |
826,434 | 31,623,764,113 | IssuesEvent | 2023-09-06 02:34:53 | robocupjunioraustralia/RCJA_Registration_System | https://api.github.com/repos/robocupjunioraustralia/RCJA_Registration_System | closed | Cannot create state as both available for registration and global. | priority | Need for this has materialised as National is now running events in its own right and will likely do so again in the future.
Should be able to set national to global, show on website, and registration available. | 1.0 | Cannot create state as both available for registration and global. - Need for this has materialised as National is now running events in its own right and will likely do so again in the future.
Should be able to set national to global, show on website, and registration available. | priority | cannot create state as both available for registration and global need for this has materialised as national is now running events in its own right and will likely do so again in the future should be able to set national to global show on website and registration available | 1 |
10,155 | 31,813,653,857 | IssuesEvent | 2023-09-13 18:43:16 | inbucket/inbucket | https://api.github.com/repos/inbucket/inbucket | closed | goreleaser: archives.rlcp should not be used anymore | automation | Printed by goreleaser 1.20
> DEPRECATED: archives.rlcp should not be used anymore, check https://goreleaser.com/deprecations#archivesrlcp for more info | 1.0 | goreleaser: archives.rlcp should not be used anymore - Printed by goreleaser 1.20
> DEPRECATED: archives.rlcp should not be used anymore, check https://goreleaser.com/deprecations#archivesrlcp for more info | non_priority | goreleaser archives rlcp should not be used anymore printed by goreleaser deprecated archives rlcp should not be used anymore check for more info | 0 |
313,956 | 9,582,715,537 | IssuesEvent | 2019-05-08 01:58:54 | mfractor/mfractor-feedback | https://api.github.com/repos/mfractor/mfractor-feedback | closed | Generate View/Viewmodel in different projects | High Priority User Requested enhancement | It would be nice to use the mvvm wizard when using a separate project for my viewmodels. This is already possible with the navigation between pages/viewmodels | 1.0 | Generate View/Viewmodel in different projects - It would be nice to use the mvvm wizard when using a separate project for my viewmodels. This is already possible with the navigation between pages/viewmodels | priority | generate view viewmodel in different projects it would be nice to use the mvvm wizard when using a separate project for my viewmodels this is already possible with the navigation between pages viewmodels | 1 |
820,116 | 30,760,151,624 | IssuesEvent | 2023-07-29 15:37:16 | OneUptime/oneuptime | https://api.github.com/repos/OneUptime/oneuptime | closed | Push Containers To GHCR.io | enhancement low priority | **Is your feature request related to a problem? Please describe.**
Docker Hub provides a very restrictive rate limit for anonymous users at 100 pulls every 6 hours per ip.
For authenticated users, its 200 pulls every 6 hours per ip.
Paid Users can get up to 5000 pull a day.
We as a community would very much appreciate mirroring the packages to ghcr.io, githubs package platform, as it offers better if not non-existant ip ratelimits.
**Describe the solution you'd like**
Please change workflows to also push containers to ghcr.io/oneuptime/container
**Describe alternatives you've considered**
Users can mirror the packages themselves through some painful and confusing steps of forking the repo, modifying the workflows to update the container image tag every so often and deploy a container.
**Additional context**
https://docs.docker.com/docker-hub/download-rate-limit/
| 1.0 | Push Containers To GHCR.io - **Is your feature request related to a problem? Please describe.**
Docker Hub provides a very restrictive rate limit for anonymous users at 100 pulls every 6 hours per ip.
For authenticated users, its 200 pulls every 6 hours per ip.
Paid Users can get up to 5000 pull a day.
We as a community would very much appreciate mirroring the packages to ghcr.io, githubs package platform, as it offers better if not non-existant ip ratelimits.
**Describe the solution you'd like**
Please change workflows to also push containers to ghcr.io/oneuptime/container
**Describe alternatives you've considered**
Users can mirror the packages themselves through some painful and confusing steps of forking the repo, modifying the workflows to update the container image tag every so often and deploy a container.
**Additional context**
https://docs.docker.com/docker-hub/download-rate-limit/
| priority | push containers to ghcr io is your feature request related to a problem please describe docker hub provides a very restrictive rate limit for anonymous users at pulls every hours per ip for authenticated users its pulls every hours per ip paid users can get up to pull a day we as a community would very much appreciate mirroring the packages to ghcr io githubs package platform as it offers better if not non existant ip ratelimits describe the solution you d like please change workflows to also push containers to ghcr io oneuptime container describe alternatives you ve considered users can mirror the packages themselves through some painful and confusing steps of forking the repo modifying the workflows to update the container image tag every so often and deploy a container additional context | 1 |
65,592 | 3,236,441,690 | IssuesEvent | 2015-10-14 05:19:41 | cs2103aug2015-w14-3j/main | https://api.github.com/repos/cs2103aug2015-w14-3j/main | closed | As a user, I can assign priority levels to a certain task | priority.high type.story | so that know which tasks I need to complete soonest | 1.0 | As a user, I can assign priority levels to a certain task - so that know which tasks I need to complete soonest | priority | as a user i can assign priority levels to a certain task so that know which tasks i need to complete soonest | 1 |
340,821 | 10,278,933,707 | IssuesEvent | 2019-08-25 18:31:04 | RoboJackets/robocup-software | https://api.github.com/repos/RoboJackets/robocup-software | closed | Implement full angle paths with motion/rotation constraints | area / planning-motion priority / high status / need-triage type / bug type / not actionable | I've noticed that fast turning causes Pathing errors even in simulator, and this should be implemented eventually anyways to have accurate shooting and stuff.
Currently, PID is used to control angle on the soccer side with a target Angle or Location in MotionControl.cpp. I've already modified Paths and Pathplanner to have the capability to have angle information, but haven't implemented actual angle path planning. InterpolatedPath has angles in their Waypoints.
We would use trapezoidal motion planning for rotation and stuff.
| 1.0 | Implement full angle paths with motion/rotation constraints - I've noticed that fast turning causes Pathing errors even in simulator, and this should be implemented eventually anyways to have accurate shooting and stuff.
Currently, PID is used to control angle on the soccer side with a target Angle or Location in MotionControl.cpp. I've already modified Paths and Pathplanner to have the capability to have angle information, but haven't implemented actual angle path planning. InterpolatedPath has angles in their Waypoints.
We would use trapezoidal motion planning for rotation and stuff.
| priority | implement full angle paths with motion rotation constraints i ve noticed that fast turning causes pathing errors even in simulator and this should be implemented eventually anyways to have accurate shooting and stuff currently pid is used to control angle on the soccer side with a target angle or location in motioncontrol cpp i ve already modified paths and pathplanner to have the capability to have angle information but haven t implemented actual angle path planning interpolatedpath has angles in their waypoints we would use trapezoidal motion planning for rotation and stuff | 1 |
273,923 | 8,554,956,213 | IssuesEvent | 2018-11-08 08:34:39 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | support.mozilla.org - see bug description | browser-firefox-mobile browser-firefox-reality priority-important | <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://support.mozilla.org/en-US/kb/install-firefox-reality
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 7.1.2
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: please add tab manage
**Steps to Reproduce**:
i accidently click into a ad and not matter how many times i ckick back it just doesnt work. please let ad open in a new tab so i can close it with one ckick.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | support.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://support.mozilla.org/en-US/kb/install-firefox-reality
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 7.1.2
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: please add tab manage
**Steps to Reproduce**:
i accidently click into a ad and not matter how many times i ckick back it just doesnt work. please let ad open in a new tab so i can close it with one ckick.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | support mozilla org see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description please add tab manage steps to reproduce i accidently click into a ad and not matter how many times i ckick back it just doesnt work please let ad open in a new tab so i can close it with one ckick browser configuration none from with ❤️ | 1 |
43,185 | 23,139,138,147 | IssuesEvent | 2022-07-28 16:42:28 | ampproject/amp-wp | https://api.github.com/repos/ampproject/amp-wp | closed | Preload background images from hero Cover blocks when parallax used | Enhancement P1 Performance Optimizer | ### Feature Description
Content-initial Cover blocks with a selected image will often be identified as a hero image candidate for the AMP Optimizer to apply SSR. When [opt-ing in to preloading responsive images](https://gist.github.com/westonruter/6adb64e6e8c858a40ac1dc51b03f16d8) (which is only supported by Chromium at the moment), such Cover block images will be successfully preloaded.
However, the Cover block does not always utilize a responsive image. Instead of having a nested `img` it will instead utilize a `background-image` style when the parallax option (“Fixed background”) is selected:

The non-AMP output looks as follows:
```html
<div class="wp-block-cover has-parallax" style="background-image:url(https://wordpressdev.lndo.site/content/uploads/2021/11/PXL_20211126_001920318.NIGHT_-scaled.jpg)">
<span aria-hidden="true" class="wp-block-cover__gradient-background has-background-dim"></span>
<div class="wp-block-cover__inner-container">
<p class="has-text-align-center has-large-font-size">Sunset</p>
</div>
</div>
```
Since responsive images are not involved here, the background image should always be preloaded when the Cover block is a hero candidate.
### Acceptance Criteria
_No response_
### Implementation Brief
_No response_
### QA Testing Instructions
_No response_
### Demo
_No response_
### Changelog Entry
_No response_ | True | Preload background images from hero Cover blocks when parallax used - ### Feature Description
Content-initial Cover blocks with a selected image will often be identified as a hero image candidate for the AMP Optimizer to apply SSR. When [opt-ing in to preloading responsive images](https://gist.github.com/westonruter/6adb64e6e8c858a40ac1dc51b03f16d8) (which is only supported by Chromium at the moment), such Cover block images will be successfully preloaded.
However, the Cover block does not always utilize a responsive image. Instead of having a nested `img` it will instead utilize a `background-image` style when the parallax option (“Fixed background”) is selected:

The non-AMP output looks as follows:
```html
<div class="wp-block-cover has-parallax" style="background-image:url(https://wordpressdev.lndo.site/content/uploads/2021/11/PXL_20211126_001920318.NIGHT_-scaled.jpg)">
<span aria-hidden="true" class="wp-block-cover__gradient-background has-background-dim"></span>
<div class="wp-block-cover__inner-container">
<p class="has-text-align-center has-large-font-size">Sunset</p>
</div>
</div>
```
Since responsive images are not involved here, the background image should always be preloaded when the Cover block is a hero candidate.
### Acceptance Criteria
_No response_
### Implementation Brief
_No response_
### QA Testing Instructions
_No response_
### Demo
_No response_
### Changelog Entry
_No response_ | non_priority | preload background images from hero cover blocks when parallax used feature description content initial cover blocks with a selected image will often be identified as a hero image candidate for the amp optimizer to apply ssr when which is only supported by chromium at the moment such cover block images will be successfully preloaded however the cover block does not always utilize a responsive image instead of having a nested img it will instead utilize a background image style when the parallax option “fixed background” is selected the non amp output looks as follows html div class wp block cover has parallax style background image url sunset since responsive images are not involved here the background image should always be preloaded when the cover block is a hero candidate acceptance criteria no response implementation brief no response qa testing instructions no response demo no response changelog entry no response | 0 |
11,340 | 7,518,705,480 | IssuesEvent | 2018-04-12 09:14:51 | mono/monodevelop | https://api.github.com/repos/mono/monodevelop | opened | Roslyn Full Solution Analysis is always on | Area: Performance Area: Roslyn Integration | http://source.roslyn.io/#Microsoft.CodeAnalysis.Workspaces/Shared/RuntimeOptions.cs,12
See this, we need to have an option for this. Maybe we could surface this in options, similar to Enable Source Analysis.
This causes the diagnostic analyzers to run on the full solution in the background, rather than just open files. | True | Roslyn Full Solution Analysis is always on - http://source.roslyn.io/#Microsoft.CodeAnalysis.Workspaces/Shared/RuntimeOptions.cs,12
See this, we need to have an option for this. Maybe we could surface this in options, similar to Enable Source Analysis.
This causes the diagnostic analyzers to run on the full solution in the background, rather than just open files. | non_priority | roslyn full solution analysis is always on see this we need to have an option for this maybe we could surface this in options similar to enable source analysis this causes the diagnostic analyzers to run on the full solution in the background rather than just open files | 0 |
505,933 | 14,654,923,689 | IssuesEvent | 2020-12-28 09:49:25 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | addons.mozilla.org - design is broken | browser-fenix engine-gecko ml-needsdiagnosis-false priority-important | <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64314 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://addons.mozilla.org/pl/android/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | addons.mozilla.org - design is broken - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64314 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://addons.mozilla.org/pl/android/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201223151005</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | addons mozilla org design is broken url browser version firefox mobile operating system android tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
124,796 | 26,540,085,947 | IssuesEvent | 2023-01-19 18:30:46 | vtex/admin-ui | https://api.github.com/repos/vtex/admin-ui | closed | Listing template: adjustments | documentation code enhancement | **[Full blown example](https://97wzs0.csb.app/)**
* Change page width to wide
* On the row Menu, add a divider before the Delete option
* Update the Filter component, so it doesn't display a Clear all button when no filters are applied
**[Basic example](https://cjmzyv.csb.app/)**
* Add pagination | 1.0 | Listing template: adjustments - **[Full blown example](https://97wzs0.csb.app/)**
* Change page width to wide
* On the row Menu, add a divider before the Delete option
* Update the Filter component, so it doesn't display a Clear all button when no filters are applied
**[Basic example](https://cjmzyv.csb.app/)**
* Add pagination | non_priority | listing template adjustments change page width to wide on the row menu add a divider before the delete option update the filter component so it doesn t display a clear all button when no filters are applied add pagination | 0 |
67,425 | 7,047,982,557 | IssuesEvent | 2018-01-02 15:51:25 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | ccl/sqlccl: TestBackupRestoreControlJob failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/4fd3e09aaa7974a7fb6f81853d717c25b082d878
Parameters:
```
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=455324&tab=buildLog
```
I171223 10:12:01.624626 9732186 sql/event_log.go:113 [client=127.0.0.1:38614,user=root,n1] Event: "create_database", target: 61, info: {DatabaseName:cancelimport Statement:CREATE DATABASE cancelimport User:root}
I171223 10:12:01.660555 9451500 storage/replica_command.go:1231 [split,n3,s3,r117/2:/{Table/60/1/9…-Max}] initiating a split of this range at key /Table/61 [r118]
I171223 10:12:01.772297 9451388 storage/replica_proposal.go:195 [n3,s3,r117/2:/{Table/60/1/9…-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
W171223 10:12:01.837455 10335420 storage/engine/mvcc.go:1871 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:01.69962Z/307866899888472065/0 @ 1514023921.774163341,1
W171223 10:12:01.956656 10338084 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:01.774163Z/307866899888472065/0 @ 1514023921.888943849,1
I171223 10:12:01.959654 10340270 ccl/sqlccl/csv.go:420 [client=127.0.0.1:38544,user=root,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
W171223 10:12:02.124131 10344974 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"running"/2017-12-23T10:12:01.774163Z/307866899888472065/0 @ 1514023922.083762405,0
W171223 10:12:02.313412 9385839 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:02.164567Z/307866901428699137/0 @ 1514023922.242377124,1
I171223 10:12:02.314676 10353592 ccl/sqlccl/csv.go:420 [client=127.0.0.1:38614,user=root,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I171223 10:12:02.367237 9385509 server/status/runtime.go:219 [n1] runtime stats: 485 MiB RSS, 564 goroutines, 47 MiB/225 MiB/339 MiB GO alloc/idle/total, 49 MiB/75 MiB CGO alloc/total, 6877.25cgo/sec, 1.10/0.05 %(u/s)time, 0.03 %gc (22x)
I171223 10:12:03.367403 10391187 storage/replica_command.go:1231 [n3,s3,r118/2:/{Table/61-Max}] initiating a split of this range at key /Table/62 [r119]
I171223 10:12:03.408160 9451374 storage/replica_proposal.go:195 [n3,s3,r118/2:/{Table/61-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.456670 10397143 storage/replica_command.go:1231 [n3,s3,r119/2:/{Table/62-Max}] initiating a split of this range at key /Table/62/1/307866901922906112/0 [r120]
I171223 10:12:03.576131 9451339 storage/replica_proposal.go:195 [n3,s3,r119/2:/{Table/62-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.621916 10405811 storage/replica_command.go:1231 [n3,s3,r120/2:/{Table/62/1/3…-Max}] initiating a split of this range at key /Table/62/1/307866903980539904/0/NULL [r121]
I171223 10:12:03.663422 9451394 storage/replica_proposal.go:195 [n3,s3,r120/2:/{Table/62/1/3…-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.827208 10417743 util/stop/stopper.go:474 quiescing; tasks left:
1 storage.intentResolver: processing intents
W171223 10:12:03.827656 10417689 storage/replica.go:2795 [n1,s1,r12/1:/Table/1{5-6}] shutdown cancellation after 0.0s of attempting command ResolveIntent [/Table/15/1/307866905046024193/0,/Min), ResolveIntent [/Table/15/2/"running"/2017-12-23T10:12:03.268963Z/307866905046024193/0,/Min), ResolveIntent [/Table/15/2/"succeeded"/2017-12-23T10:12:03.268963Z/307866905046024193/0,/Min)
W171223 10:12:03.829147 9457417 storage/raft_transport.go:461 [n3] raft transport stream to node 1 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.829698 10417675 storage/intent_resolver.go:330 [n3,s3] failed to resolve intents: failed to send RPC: sending to all 3 replicas failed; last error: {<nil> rpc error: code = FailedPrecondition desc = grpc: the client connection is closing}
W171223 10:12:03.829865 9466313 storage/raft_transport.go:461 [n3] raft transport stream to node 2 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.830269 9454795 storage/raft_transport.go:461 [n2] raft transport stream to node 1 failed: rpc error: code = Unavailable desc = transport is closing
W171223 10:12:03.830417 9465941 storage/raft_transport.go:461 [n2] raft transport stream to node 3 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.830941 9454740 storage/raft_transport.go:461 [n1] raft transport stream to node 2 failed: EOF
W171223 10:12:03.831072 9457373 storage/raft_transport.go:461 [n1] raft transport stream to node 3 failed: EOF
I171223 10:12:03.832209 10417741 util/stop/stopper.go:474 quiescing; tasks left:
1 node.Node: batch
E171223 10:12:03.836100 9447333 sql/jobs/registry.go:208 error while adopting jobs: node unavailable; try another peer
E171223 10:12:03.843427 9451788 sql/jobs/registry.go:208 error while adopting jobs: node unavailable; try another peer
``` | 1.0 | ccl/sqlccl: TestBackupRestoreControlJob failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/4fd3e09aaa7974a7fb6f81853d717c25b082d878
Parameters:
```
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=455324&tab=buildLog
```
I171223 10:12:01.624626 9732186 sql/event_log.go:113 [client=127.0.0.1:38614,user=root,n1] Event: "create_database", target: 61, info: {DatabaseName:cancelimport Statement:CREATE DATABASE cancelimport User:root}
I171223 10:12:01.660555 9451500 storage/replica_command.go:1231 [split,n3,s3,r117/2:/{Table/60/1/9…-Max}] initiating a split of this range at key /Table/61 [r118]
I171223 10:12:01.772297 9451388 storage/replica_proposal.go:195 [n3,s3,r117/2:/{Table/60/1/9…-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
W171223 10:12:01.837455 10335420 storage/engine/mvcc.go:1871 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:01.69962Z/307866899888472065/0 @ 1514023921.774163341,1
W171223 10:12:01.956656 10338084 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:01.774163Z/307866899888472065/0 @ 1514023921.888943849,1
I171223 10:12:01.959654 10340270 ccl/sqlccl/csv.go:420 [client=127.0.0.1:38544,user=root,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
W171223 10:12:02.124131 10344974 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"running"/2017-12-23T10:12:01.774163Z/307866899888472065/0 @ 1514023922.083762405,0
W171223 10:12:02.313412 9385839 storage/engine/mvcc.go:1900 [n1,s1,r12/1:/Table/1{5-6}] unable to find value for /Table/15/2/"pending"/2017-12-23T10:12:02.164567Z/307866901428699137/0 @ 1514023922.242377124,1
I171223 10:12:02.314676 10353592 ccl/sqlccl/csv.go:420 [client=127.0.0.1:38614,user=root,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I171223 10:12:02.367237 9385509 server/status/runtime.go:219 [n1] runtime stats: 485 MiB RSS, 564 goroutines, 47 MiB/225 MiB/339 MiB GO alloc/idle/total, 49 MiB/75 MiB CGO alloc/total, 6877.25cgo/sec, 1.10/0.05 %(u/s)time, 0.03 %gc (22x)
I171223 10:12:03.367403 10391187 storage/replica_command.go:1231 [n3,s3,r118/2:/{Table/61-Max}] initiating a split of this range at key /Table/62 [r119]
I171223 10:12:03.408160 9451374 storage/replica_proposal.go:195 [n3,s3,r118/2:/{Table/61-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.456670 10397143 storage/replica_command.go:1231 [n3,s3,r119/2:/{Table/62-Max}] initiating a split of this range at key /Table/62/1/307866901922906112/0 [r120]
I171223 10:12:03.576131 9451339 storage/replica_proposal.go:195 [n3,s3,r119/2:/{Table/62-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.621916 10405811 storage/replica_command.go:1231 [n3,s3,r120/2:/{Table/62/1/3…-Max}] initiating a split of this range at key /Table/62/1/307866903980539904/0/NULL [r121]
I171223 10:12:03.663422 9451394 storage/replica_proposal.go:195 [n3,s3,r120/2:/{Table/62/1/3…-Max}] new range lease repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0 following repl=(n3,s3):2 start=1514023919.845143483,0 epo=1 pro=1514023919.845212063,0
I171223 10:12:03.827208 10417743 util/stop/stopper.go:474 quiescing; tasks left:
1 storage.intentResolver: processing intents
W171223 10:12:03.827656 10417689 storage/replica.go:2795 [n1,s1,r12/1:/Table/1{5-6}] shutdown cancellation after 0.0s of attempting command ResolveIntent [/Table/15/1/307866905046024193/0,/Min), ResolveIntent [/Table/15/2/"running"/2017-12-23T10:12:03.268963Z/307866905046024193/0,/Min), ResolveIntent [/Table/15/2/"succeeded"/2017-12-23T10:12:03.268963Z/307866905046024193/0,/Min)
W171223 10:12:03.829147 9457417 storage/raft_transport.go:461 [n3] raft transport stream to node 1 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.829698 10417675 storage/intent_resolver.go:330 [n3,s3] failed to resolve intents: failed to send RPC: sending to all 3 replicas failed; last error: {<nil> rpc error: code = FailedPrecondition desc = grpc: the client connection is closing}
W171223 10:12:03.829865 9466313 storage/raft_transport.go:461 [n3] raft transport stream to node 2 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.830269 9454795 storage/raft_transport.go:461 [n2] raft transport stream to node 1 failed: rpc error: code = Unavailable desc = transport is closing
W171223 10:12:03.830417 9465941 storage/raft_transport.go:461 [n2] raft transport stream to node 3 failed: rpc error: code = FailedPrecondition desc = grpc: the client connection is closing
W171223 10:12:03.830941 9454740 storage/raft_transport.go:461 [n1] raft transport stream to node 2 failed: EOF
W171223 10:12:03.831072 9457373 storage/raft_transport.go:461 [n1] raft transport stream to node 3 failed: EOF
I171223 10:12:03.832209 10417741 util/stop/stopper.go:474 quiescing; tasks left:
1 node.Node: batch
E171223 10:12:03.836100 9447333 sql/jobs/registry.go:208 error while adopting jobs: node unavailable; try another peer
E171223 10:12:03.843427 9451788 sql/jobs/registry.go:208 error while adopting jobs: node unavailable; try another peer
``` | non_priority | ccl sqlccl testbackuprestorecontroljob failed under stress sha parameters tags deadlock goflags stress build found a failed test sql event log go event create database target info databasename cancelimport statement create database cancelimport user root storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl start epo pro following repl start epo pro storage engine mvcc go unable to find value for table pending storage engine mvcc go unable to find value for table pending ccl sqlccl csv go could not fetch file size falling back to per file progress bad contentlength storage engine mvcc go unable to find value for table running storage engine mvcc go unable to find value for table pending ccl sqlccl csv go could not fetch file size falling back to per file progress bad contentlength server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total sec u s time gc storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl start epo pro following repl start epo pro storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl start epo pro following repl start epo pro storage replica command go initiating a split of this range at key table null storage replica proposal go new range lease repl start epo pro following repl start epo pro util stop stopper go quiescing tasks left storage intentresolver processing intents storage replica go shutdown cancellation after of attempting command resolveintent table min resolveintent table running min resolveintent table succeeded min storage raft transport go raft transport stream to node failed rpc error code failedprecondition desc grpc the client connection is closing storage intent resolver go failed to resolve intents failed to send rpc sending to all replicas failed 
last error rpc error code failedprecondition desc grpc the client connection is closing storage raft transport go raft transport stream to node failed rpc error code failedprecondition desc grpc the client connection is closing storage raft transport go raft transport stream to node failed rpc error code unavailable desc transport is closing storage raft transport go raft transport stream to node failed rpc error code failedprecondition desc grpc the client connection is closing storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed eof util stop stopper go quiescing tasks left node node batch sql jobs registry go error while adopting jobs node unavailable try another peer sql jobs registry go error while adopting jobs node unavailable try another peer | 0 |
274,774 | 23,866,542,843 | IssuesEvent | 2022-09-07 11:32:38 | CommunitySolidServer/CommunitySolidServer | https://api.github.com/repos/CommunitySolidServer/CommunitySolidServer | closed | Add conformance tests to PR CI | ➕ test | The latest version of the conformance test harness added an `--ignore-failures` option which we could use to make it so it can be run when doing a PR without blocking the merge. https://github.com/solid/conformance-test-harness/releases/tag/v1.0.10
But even better is that we can now target specific versions, see https://github.com/solid/conformance-test-harness/blob/v1.0.10/USAGE.md#running-in-a-docker-container
We could for example fix to a specific version that we know works for PRs, and use the daily CI to test the latest version (and make sure we update when needed). | 1.0 | Add conformance tests to PR CI - The latest version of the conformance test harness added an `--ignore-failures` option which we could use to make it so it can be run when doing a PR without blocking the merge. https://github.com/solid/conformance-test-harness/releases/tag/v1.0.10
But even better is that we can now target specific versions, see https://github.com/solid/conformance-test-harness/blob/v1.0.10/USAGE.md#running-in-a-docker-container
We could for example fix to a specific version that we know works for PRs, and use the daily CI to test the latest version (and make sure we update when needed). | non_priority | add conformance tests to pr ci the latest version of the conformance test harness added an ignore failures option which we could use to make it so it can be run when doing a pr without blocking the merge but even better is that we can now target specific versions see we could for example fix to a specific version that we know works for prs and use the daily ci to test the latest version and make sure we update when needed | 0 |
604,898 | 18,720,736,114 | IssuesEvent | 2021-11-03 11:28:29 | canonical-web-and-design/snapcraft.io | https://api.github.com/repos/canonical-web-and-design/snapcraft.io | closed | Metrics showing inaccurate values | Priority: High 🚀 Dev ready | 
There are some instances where the tooltip is not showing accurate values on metrics and probably it has to do with some data that has changed or we haven't updated.
[Raised in the forum in here](https://forum.snapcraft.io/t/by-channel-metric-not-showing-accurate-references/12897) | 1.0 | Metrics showing inaccurate values - 
There are some instances where the tooltip is not showing accurate values on metrics and probably it has to do with some data that has changed or we haven't updated.
[Raised in the forum in here](https://forum.snapcraft.io/t/by-channel-metric-not-showing-accurate-references/12897) | priority | metrics showing inaccurate values there are some instances where the tooltip is not showing accurate values on metrics and probably it has to do with some data that has changed or we haven t updated | 1 |
337,944 | 10,221,669,125 | IssuesEvent | 2019-08-16 02:50:22 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | opened | [LOCALIZATION] | old_god_partly_free_effect_desc | :beetle: bug - localization :scroll: :grey_exclamation: priority low | **Mod Version**
458123df
**Please explain your issue in as much detail as possible:**
No loc
**Upload screenshots of the problem localization:**
<details>

</details> | 1.0 | [LOCALIZATION] | old_god_partly_free_effect_desc - **Mod Version**
458123df
**Please explain your issue in as much detail as possible:**
No loc
**Upload screenshots of the problem localization:**
<details>

</details> | priority | old god partly free effect desc mod version please explain your issue in as much detail as possible no loc upload screenshots of the problem localization | 1 |
251,786 | 8,027,302,285 | IssuesEvent | 2018-07-27 08:38:01 | Optiboot/optiboot | https://api.github.com/repos/Optiboot/optiboot | closed | Does it possible to call/ jump into boot loader from main loop ( for remote access)? | Priority-Medium Type-Enhancement auto-migrated | ```
What steps will reproduce the problem?
1.-
2.
3.
What is the expected output? What do you see instead?
-
What version of the product are you using? On what operating system?
AVRdude / Custom Made board / Win7
Please provide any additional information below.
For develop remote flash by serial link application. I try to jump from main loop to boot loader. Please comment.
```
Original issue reported on code.google.com by `tsupa...@gmail.com` on 20 Jan 2012 at 2:33
| 1.0 | Does it possible to call/ jump into boot loader from main loop ( for remote access)? - ```
What steps will reproduce the problem?
1.-
2.
3.
What is the expected output? What do you see instead?
-
What version of the product are you using? On what operating system?
AVRdude / Custom Made board / Win7
Please provide any additional information below.
For develop remote flash by serial link application. I try to jump from main loop to boot loader. Please comment.
```
Original issue reported on code.google.com by `tsupa...@gmail.com` on 20 Jan 2012 at 2:33
| priority | does it possible to call jump into boot loader from main loop for remote access what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system avrdude custom made board please provide any additional information below for develop remote flash by serial link application i try to jump from main loop to boot loader please comment original issue reported on code google com by tsupa gmail com on jan at | 1 |
50,403 | 21,093,868,907 | IssuesEvent | 2022-04-04 08:28:36 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Compute Instance naming rule is not identical with Azure ML portal | enhancement service/machine-learning | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
The maximum length of a compute instance name is 16 characters, while its length at Azure portal is 28 characters.
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Potential Terraform Configuration
N/A
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_machine_learning_compute_instance`
### Expected Behaviour
When the length of compute instance is between 16 and 28, the instance should be successfully created.
<!--- What should have happened? --->
### Actual Behaviour
An error shows:
```hcl
Error: invalid value for name (It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.)
```
<!--- What actually happened? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
N/A
| 1.0 | Compute Instance naming rule is not identical with Azure ML portal - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
The maximum length of a compute instance name is 16 characters, while its length at Azure portal is 28 characters.
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Potential Terraform Configuration
N/A
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_machine_learning_compute_instance`
### Expected Behaviour
When the length of compute instance is between 16 and 28, the instance should be successfully created.
<!--- What should have happened? --->
### Actual Behaviour
An error shows:
```hcl
Error: invalid value for name (It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.)
```
<!--- What actually happened? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
N/A
| non_priority | compute instance naming rule is not identical with azure ml portal please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description the maximum length of a compute instance name is characters while its length at azure portal is characters potential terraform configuration n a affected resource s azurerm machine learning compute instance expected behaviour when the length of compute instance is between and the instance should be successfully created actual behaviour an error shows hcl error invalid value for name it can include letters digits and dashes it must start with a letter end with a letter or digit and be between and characters in length references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation n a | 0 |
136,360 | 5,281,235,541 | IssuesEvent | 2017-02-07 16:03:17 | timberline-secondary/flex-site | https://api.github.com/repos/timberline-secondary/flex-site | opened | Date filter in Add Registration in Admin | 4. Low Priority | Add a date filter in the registration section for registering a student. Right now the drop down list lists ALL the event and you cannot tell by looking at it which event (date) the student will actually be registered for. | 1.0 | Date filter in Add Registration in Admin - Add a date filter in the registration section for registering a student. Right now the drop down list lists ALL the event and you cannot tell by looking at it which event (date) the student will actually be registered for. | priority | date filter in add registration in admin add a date filter in the registration section for registering a student right now the drop down list lists all the event and you cannot tell by looking at it which event date the student will actually be registered for | 1 |
247,051 | 18,857,235,088 | IssuesEvent | 2021-11-12 08:20:48 | Daimler/odxtools | https://api.github.com/repos/Daimler/odxtools | opened | Better API documentation | documentation good first issue help wanted nice to have | The API documentation is generated from the python docstrings and it can be generated and inspected via
```bash
cd $ODXTOOLS_SRC_DIR/doc
make html
firefox _build/html/index.html
```
It IMO already looks decent, but it is sorely lacking in completeness and does not feature any "tutorial style" introductions. this should be changed in the medium to long term. | 1.0 | Better API documentation - The API documentation is generated from the python docstrings and it can be generated and inspected via
```bash
cd $ODXTOOLS_SRC_DIR/doc
make html
firefox _build/html/index.html
```
It IMO already looks decent, but it is sorely lacking in completeness and does not feature any "tutorial style" introductions. this should be changed in the medium to long term. | non_priority | better api documentation the api documentation is generated from the python docstrings and it can be generated and inspected via bash cd odxtools src dir doc make html firefox build html index html it imo already looks decent but it is sorely lacking in completeness and does not feature any tutorial style introductions this should be changed in the medium to long term | 0 |
789,877 | 27,808,475,267 | IssuesEvent | 2023-03-17 23:00:31 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] ALTER TABLE fails constraints if executed in the same connection | kind/bug area/ysql priority/medium status/awaiting-triage | Jira Link: [DB-5878](https://yugabyte.atlassian.net/browse/DB-5878)
### Description
In YB master, executing the following sequence of statements will fail
```SQL
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ERROR: duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
```
while the following sequence of statement can succeed
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v2 int NOT NULL);
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
Note: if the `ALTER` is executed in another connection, it can succeed as well
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=#
yugabyte=# ^D\q
<restart connection>
ssong@dev-server-ssong ~/c/yugabyte-db (fix-preload-mem-scan-context)> ysqlsh
ysqlsh (11.2-YB-2.17.2.0-b0)
Type "help" for help.
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
In, PG the same sequence of statements can succeed
```
postgres=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
postgres=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5878]: https://yugabyte.atlassian.net/browse/DB-5878?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] ALTER TABLE fails constraints if executed in the same connection - Jira Link: [DB-5878](https://yugabyte.atlassian.net/browse/DB-5878)
### Description
In YB master, executing the following sequence of statements will fail
```SQL
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ERROR: duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
```
while the following sequence of statement can succeed
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v2 int NOT NULL);
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
Note: if the `ALTER` is executed in another connection, it can succeed as well
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=#
yugabyte=# ^D\q
<restart connection>
ssong@dev-server-ssong ~/c/yugabyte-db (fix-preload-mem-scan-context)> ysqlsh
ysqlsh (11.2-YB-2.17.2.0-b0)
Type "help" for help.
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
In, PG the same sequence of statements can succeed
```
postgres=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
postgres=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5878]: https://yugabyte.atlassian.net/browse/DB-5878?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | alter table fails constraints if executed in the same connection jira link description in yb master executing the following sequence of statements will fail sql yugabyte create table nopk id int check id int check create table yugabyte alter table nopk add primary key id error duplicate key value violates unique constraint pg constraint conrelid contypid conname index while the following sequence of statement can succeed yugabyte create table nopk id int check id create table yugabyte alter table nopk add primary key id alter table yugabyte create table nopk id int check id int not null create table yugabyte alter table nopk add primary key id alter table note if the alter is executed in another connection it can succeed as well yugabyte create table nopk id int check id int check create table yugabyte yugabyte d q ssong dev server ssong c yugabyte db fix preload mem scan context ysqlsh ysqlsh yb type help for help yugabyte alter table nopk add primary key id alter table in pg the same sequence of statements can succeed postgres create table nopk id int check id int check create table postgres alter table nopk add primary key id alter table warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
53,501 | 13,261,773,480 | IssuesEvent | 2020-08-20 20:30:29 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | cmake - `make deploy-docs` goes to the wrong destination (Trac #1548) | Migrated from Trac cmake defect | goes to "METAPROJECT_release"
should just go to "METAPROJECT"
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1548">https://code.icecube.wisc.edu/projects/icecube/ticket/1548</a>, reported by negaand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-21T18:46:34",
"_ts": "1461264394191468",
"description": "goes to \"METAPROJECT_release\"\nshould just go to \"METAPROJECT\"",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2016-02-12T22:53:11",
"component": "cmake",
"summary": "cmake - `make deploy-docs` goes to the wrong destination",
"priority": "normal",
"keywords": "docs website",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | cmake - `make deploy-docs` goes to the wrong destination (Trac #1548) - goes to "METAPROJECT_release"
should just go to "METAPROJECT"
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1548">https://code.icecube.wisc.edu/projects/icecube/ticket/1548</a>, reported by negaand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-21T18:46:34",
"_ts": "1461264394191468",
"description": "goes to \"METAPROJECT_release\"\nshould just go to \"METAPROJECT\"",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2016-02-12T22:53:11",
"component": "cmake",
"summary": "cmake - `make deploy-docs` goes to the wrong destination",
"priority": "normal",
"keywords": "docs website",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| non_priority | cmake make deploy docs goes to the wrong destination trac goes to metaproject release should just go to metaproject migrated from json status closed changetime ts description goes to metaproject release nshould just go to metaproject reporter nega cc resolution fixed time component cmake summary cmake make deploy docs goes to the wrong destination priority normal keywords docs website milestone owner nega type defect | 0 |
244,340 | 7,874,036,230 | IssuesEvent | 2018-06-25 15:49:50 | cms-gem-daq-project/gem-plotting-tools | https://api.github.com/repos/cms-gem-daq-project/gem-plotting-tools | opened | Feature Request: 2D Map of Detector Scurve Width | Priority: Medium Status: Help Wanted Type: Enhancement | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
To better correlate channel loss with physical location on the detector a new distribution is needed. It should be a `TH2F` which has on the y-axis `ieta` and on the x-axis `strip`. Here `strip` should go as 0 to 383. The z-axis should be the scurve width.
However additional distributions may be of interest in the future so we should add a function to [anautilities.py](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/anautilities.py). This function should take as input either a dictionary or a multidimensional numpy array. In either case something of the form of:
```
def makeDetectorMap(inputContainer):
"""
inputContainer - container where inputContainer[vfat][ROBstr] is the observable of interest for (vfat, ROBstr) ordered pair
"""
#initialize some TH2F object called hDetectorMap
#Loop over inputContainer[vfat][ROBstr]
#Get ieta position corresponding to (vfat, ROBstr) using chamber_vfatPos2iEta
#Determine binX and binY of hDetectorMap that corresponds to (vfat, ROBstr)
#Use the TH2F::SetBinContent() method to set inputContainer[vfat][ROBstr] to (binX,binY)
return hDetectorMap
```
Where `chamber_vfatPos2iEta` is imported from [chamberInfo Line 72](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/3f1d85dedc8963f72467b1d253112b3f8d57aa04/mapping/chamberInfo.py#L72)
Then add making a detector map of scurve width in `anaUltraScurve.py`
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
We should make a physical map of scurve width as a single 2D plot across the detector in `anaUltraScurve.py`
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
See above pseudocode.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Will help us understand the nature of the channel loss.
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Feature Request: 2D Map of Detector Scurve Width - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
To better correlate channel loss with physical location on the detector a new distribution is needed. It should be a `TH2F` which has on the y-axis `ieta` and on the x-axis `strip`. Here `strip` should go as 0 to 383. The z-axis should be the scurve width.
However additional distributions may be of interest in the future so we should add a function to [anautilities.py](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/anautilities.py). This function should take as input either a dictionary or a multidimensional numpy array. In either case something of the form of:
```
def makeDetectorMap(inputContainer):
"""
inputContainer - container where inputContainer[vfat][ROBstr] is the observable of interest for (vfat, ROBstr) ordered pair
"""
#initialize some TH2F object called hDetectorMap
#Loop over inputContainer[vfat][ROBstr]
#Get ieta position corresponding to (vfat, ROBstr) using chamber_vfatPos2iEta
#Determine binX and binY of hDetectorMap that corresponds to (vfat, ROBstr)
#Use the TH2F::SetBinContent() method to set inputContainer[vfat][ROBstr] to (binX,binY)
return hDetectorMap
```
Where `chamber_vfatPos2iEta` is imported from [chamberInfo Line 72](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/3f1d85dedc8963f72467b1d253112b3f8d57aa04/mapping/chamberInfo.py#L72)
Then add making a detector map of scurve width in `anaUltraScurve.py`
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
We should make a physical map of scurve width as a single 2D plot across the detector in `anaUltraScurve.py`
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
See above pseudocode.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Will help us understand the nature of the channel loss.
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| priority | feature request map of detector scurve width brief summary of issue to better correlate channel loss with physical location on the detector a new distribution is needed it should be a which has on the y axis ieta and on the x axis strip here strip should go as to the z axis should be the scurve width however additional distributions may be of interest in the future so we should add a function to this function should take as input either a dictionary or a multidimensional numpy array in either case something of the form of def makedetectormap inputcontainer inputcontainer container where inputcontainer is the observable of interest for vfat robstr ordered pair initialize some object called hdetectormap loop over inputcontainer get ieta position corresponding to vfat robstr using chamber determine binx and biny of hdetectormap that corresponds to vfat robstr use the setbincontent method to set inputcontainer to binx biny return hdetectormap where is imported from chamber comes from then add making a detector map of scurve width in anaultrascurve py types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior we should make a physical map of scurve width as a single plot across the detector in anaultrascurve py current behavior see above pseudocode context for feature requests will help us understand the nature of the channel loss | 1 |
16,495 | 22,333,858,207 | IssuesEvent | 2022-06-14 16:39:06 | ikemen-engine/Ikemen-GO | https://api.github.com/repos/ikemen-engine/Ikemen-GO | closed | Couple of issues with ReversalDef | bug compatibility | Found a few problems with this sctrl compared to how it works in Mugen.
The first one is easy to explain: if you use a ReversalDef without P2stateno, such as for autoguard moves or parrying, a player caught by it will become your target indefinitely. Or until you hit him again anyway. In Mugen the target is dropped if his Movetype is not H, like with HitDef.
The second one is more serious but also harder to explain. In Mugen, if a Hitdef clashes with a ReversalDef, the Hitdef will immediately be nullified. Like the ReversalDef is processed first. In Ikemen Go the Hitdef may keep going and be able to hit other players.
This can lead to some funny behaviour and also breaks my Parry code, which uses a ReversalDef in a separate helper. What's also interesting here is that the exchange between a ReversalDef and a HitDef will have different results according to player order. To use the Parry example again, the player that gets parried will receive a MoveHit or MoveReversed flag depending on which team side he is.
I'm attaching a video with a small example. In Mugen, only Geese's attack connects in this exchange.
https://user-images.githubusercontent.com/107247004/173037796-19abf1de-b6f4-43d6-86bc-673827863d60.mp4
| True | Couple of issues with ReversalDef - Found a few problems with this sctrl compared to how it works in Mugen.
The first one is easy to explain: if you use a ReversalDef without P2stateno, such as for autoguard moves or parrying, a player caught by it will become your target indefinitely. Or until you hit him again anyway. In Mugen the target is dropped if his Movetype is not H, like with HitDef.
The second one is more serious but also harder to explain. In Mugen, if a Hitdef clashes with a ReversalDef, the Hitdef will immediately be nullified. Like the ReversalDef is processed first. In Ikemen Go the Hitdef may keep going and be able to hit other players.
This can lead to some funny behaviour and also breaks my Parry code, which uses a ReversalDef in a separate helper. What's also interesting here is that the exchange between a ReversalDef and a HitDef will have different results according to player order. To use the Parry example again, the player that gets parried will receive a MoveHit or MoveReversed flag depending on which team side he is.
I'm attaching a video with a small example. In Mugen, only Geese's attack connects in this exchange.
https://user-images.githubusercontent.com/107247004/173037796-19abf1de-b6f4-43d6-86bc-673827863d60.mp4
| non_priority | couple of issues with reversaldef found a few problems with this sctrl compared to how it works in mugen the first one is easy to explain if you use a reversaldef without such as for autoguard moves or parrying a player caught by it will become your target indefinitely or until you hit him again anyway in mugen the target is dropped if his movetype is not h like with hitdef the second one is more serious but also harder to explain in mugen if a hitdef clashes with a reversaldef the hitdef will immediately be nullified like the reversaldef is processed first in ikemen go the hitdef may keep going and be able to hit other players this can lead to some funny behaviour and also breaks my parry code which uses a reversaldef in a separate helper what s also interesting here is that the exchange between a reversaldef and a hitdef will have different results according to player order to use the parry example again the player that gets parried will receive a movehit or movereversed flag depending on which team side he is i m attaching a video with a small example in mugen only geese s attack connects in this exchange | 0 |
433,775 | 30,349,528,028 | IssuesEvent | 2023-07-11 17:50:13 | EducationalTestingService/rsmtool | https://api.github.com/repos/EducationalTestingService/rsmtool | closed | Add best practices for sharing reports to documentation. | documentation | It would be useful to add some best practices for sharing reports with other people to the documentation. When to send just the HTML, when to zip up everything, when to include the CSVs etc. | 1.0 | Add best practices for sharing reports to documentation. - It would be useful to add some best practices for sharing reports with other people to the documentation. When to send just the HTML, when to zip up everything, when to include the CSVs etc. | non_priority | add best practices for sharing reports to documentation it would be useful to add some best practices for sharing reports with other people to the documentation when to send just the html when to zip up everything when to include the csvs etc | 0 |
445,671 | 12,834,775,788 | IssuesEvent | 2020-07-07 11:41:54 | radical-cybertools/radical.pilot | https://api.github.com/repos/radical-cybertools/radical.pilot | closed | Get RP running on NCAR Cheyenne | layer:rp priority:critical topic:deployment | Our collaborators want to run their applications on Cheyenne. See ticket https://github.com/radical-collaboration/hpc-workflows/issues/28.
A config file has been added for cheyenne under resource_ncar.json in the feature/cheyenne branch. Pending instructions to create a static VE on cheyenne (if required), need to confirm with Andre. | 1.0 | Get RP running on NCAR Cheyenne - Our collaborators want to run their applications on Cheyenne. See ticket https://github.com/radical-collaboration/hpc-workflows/issues/28.
A config file has been added for cheyenne under resource_ncar.json in the feature/cheyenne branch. Pending instructions to create a static VE on cheyenne (if required), need to confirm with Andre. | priority | get rp running on ncar cheyenne our collaborators want to run their applications on cheyenne see ticket a config file has been added for cheyenne under resource ncar json in the feature cheyenne branch pending instructions to create a static ve on cheyenne if required need to confirm with andre | 1 |
60,147 | 8,406,091,473 | IssuesEvent | 2018-10-11 16:56:11 | GMOD/jbrowse | https://api.github.com/repos/GMOD/jbrowse | closed | need a "embedding" section of documentation | documentation has pullreq in progress | Probably title it "Advanced Configuration -> Embedding JBrowse in another page".
Should tell users how to embed JBrowse without an iframe. | 1.0 | need a "embedding" section of documentation - Probably title it "Advanced Configuration -> Embedding JBrowse in another page".
Should tell users how to embed JBrowse without an iframe. | non_priority | need a embedding section of documentation probably title it advanced configuration embedding jbrowse in another page should tell users how to embed jbrowse without an iframe | 0 |
40,531 | 5,301,545,964 | IssuesEvent | 2017-02-10 10:03:08 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | FAIL: TestTriggers_configChange | area/tests component/deployments kind/test-flake priority/P1 | ```
--- FAIL: TestTriggers_configChange (4.99s)
deploy_trigger_test.go:666: Operation cannot be fulfilled on deploymentconfigs "config": the object has been modified; please apply your changes to the latest version and try again
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_integration/8159/consoleFull#-94697474756bf4006e4b05b79524e5923
From the logs it seems that the test config was reconciled 20 times in one second which made the retry we already have for conflicts (max 5 retries) to blow up.
cc: @mfojtik @smarterclayton | 2.0 | FAIL: TestTriggers_configChange - ```
--- FAIL: TestTriggers_configChange (4.99s)
deploy_trigger_test.go:666: Operation cannot be fulfilled on deploymentconfigs "config": the object has been modified; please apply your changes to the latest version and try again
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_integration/8159/consoleFull#-94697474756bf4006e4b05b79524e5923
From the logs it seems that the test config was reconciled 20 times in one second which made the retry we already have for conflicts (max 5 retries) to blow up.
cc: @mfojtik @smarterclayton | non_priority | fail testtriggers configchange fail testtriggers configchange deploy trigger test go operation cannot be fulfilled on deploymentconfigs config the object has been modified please apply your changes to the latest version and try again from the logs it seems that the test config was reconciled times in one second which made the retry we already have for conflicts max retries to blow up cc mfojtik smarterclayton | 0 |
291,337 | 8,923,563,989 | IssuesEvent | 2019-01-21 15:58:40 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | closed | Score filtering dropdown menu disappears after selecting zero-count rating | component: add-on ratings priority: p2 state: pull request ready | Describe the problem and steps to reproduce it:
1. Load addons-dev.allizom.org
2. go to reviews page (addon or theme without too many reviews..)
3. In score filtering dropdown menu, select rating with 0 review
What happened?
The dropdown disappears.
What did you expect to happen?
The dropdown still be there so that I can select other ratings. And "There are no reviews" message is confusing when the dropdown disappears.

| 1.0 | Score filtering dropdown menu disappears after selecting zero-count rating - Describe the problem and steps to reproduce it:
1. Load addons-dev.allizom.org
2. go to reviews page (addon or theme without too many reviews..)
3. In score filtering dropdown menu, select rating with 0 review
What happened?
The dropdown disappears.
What did you expect to happen?
The dropdown still be there so that I can select other ratings. And "There are no reviews" message is confusing when the dropdown disappears.

| priority | score filtering dropdown menu disappears after selecting zero count rating describe the problem and steps to reproduce it load addons dev allizom org go to reviews page addon or theme without too many reviews in score filtering dropdown menu select rating with review what happened the dropdown disappears what did you expect to happen the dropdown still be there so that i can select other ratings and there are no reviews message is confusing when the dropdown disappears | 1 |
415,417 | 12,129,139,829 | IssuesEvent | 2020-04-22 21:53:28 | kubeflow/kubeflow | https://api.github.com/repos/kubeflow/kubeflow | closed | Notebook auto-scaling | kind/question platform/aws priority/p2 | /kind question
**Question:**
I've installed kubeflow (1.0) on AWS on a EKS cluster with autoscaling. Min number of instances 0, max number of instances 4, and desired 2.
I was under the impression that as I request for more notebooks (and resources for it) Kubeflow would be smart enough to auto-scale by itself until it hits the max limit (4 instances in my case). Is that the intended behavior or auto-scaling is intended to be done manually through the underlying EKS cluster ? | 1.0 | Notebook auto-scaling - /kind question
**Question:**
I've installed kubeflow (1.0) on AWS on a EKS cluster with autoscaling. Min number of instances 0, max number of instances 4, and desired 2.
I was under the impression that as I request for more notebooks (and resources for it) Kubeflow would be smart enough to auto-scale by itself until it hits the max limit (4 instances in my case). Is that the intended behavior or auto-scaling is intended to be done manually through the underlying EKS cluster ? | priority | notebook auto scaling kind question question i ve installed kubeflow on aws on a eks cluster with autoscaling min number of instances max number of instances and desired i was under the impression that as i request for more notebooks and resources for it kubeflow would be smart enough to auto scale by itself until it hits the max limit instances in my case is that the intended behavior or auto scaling is intended to be done manually through the underlying eks cluster | 1
12,140 | 2,685,250,602 | IssuesEvent | 2015-03-29 21:12:11 | IssueMigrationTest/Test5 | https://api.github.com/repos/IssueMigrationTest/Test5 | closed | Support "with" statement | auto-migrated Priority-Medium Type-Defect | **Issue by [Jérémie Roquet](/arkanosis)**
_6 Jan 2010 at 10:25 GMT_
_Originally opened on Google Code_
----
```
Hello,
It would be nice to have support for the "with" statement introduced in
http://www.python.org/dev/peps/pep-0343/
It's available in CPython from version 2.5 (in the __future__
pseudo-module) and from version 2.6 (as a "normal" keyword).
Thanks,
```
| 1.0 | Support "with" statement - **Issue by [Jérémie Roquet](/arkanosis)**
_6 Jan 2010 at 10:25 GMT_
_Originally opened on Google Code_
----
```
Hello,
It would be nice to have support for the "with" statement introduced in
http://www.python.org/dev/peps/pep-0343/
It's available in CPython from version 2.5 (in the __future__
pseudo-module) and from version 2.6 (as a "normal" keyword).
Thanks,
```
| non_priority | support with statement issue by arkanosis jan at gmt originally opened on google code hello it would be nice to have support for the with statement introduced in it s available in cpython from version in the future pseudo module and from version as a normal keyword thanks | 0 |
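The row above asks for support of Python's `with` statement from PEP 343. For reference, the protocol that statement introduced is small: any object exposing `__enter__`/`__exit__` can be used, and `__exit__` runs whether or not the body raises. A minimal self-contained illustration:

```python
class Tracked:
    """Minimal context manager illustrating the PEP 343 protocol."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self  # value bound to the `as` target

    def __exit__(self, exc_type, exc, tb):
        self.events.append("exit")
        return False  # do not suppress exceptions

t = Tracked()
with t as resource:
    resource.events.append("body")
# t.events is now ["enter", "body", "exit"]; __exit__ also runs if the body raises.
```

This is the feature a compiler/runtime has to support: evaluate the expression, call `__enter__`, run the block, and guarantee the `__exit__` call on all exit paths (available from CPython 2.5 via `__future__`, and unconditionally from 2.6).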
107,057 | 23,339,746,857 | IssuesEvent | 2022-08-09 13:08:48 | neovim/neovim | https://api.github.com/repos/neovim/neovim | closed | `set ambiwidth=double` will cause display error | bug tui display unicode 💩 | ### Neovim version (nvim -v)
NVIM v0.7.0 Build type: Release
### Vim (not Nvim) behaves the same?
no, vim8.2
### Operating system/version
manjaro
### Terminal name/version
terminator
### $TERM environment variable
xterm-256color
### Installation
AUR
### How to reproduce the issue
1. nvim --clean
2. paste this line: `因此如果您在网页版的坚果云发现同一个目录下有两个名称为“Nutstore”和“nutstore”(小写的“n”)的文件/文件夹,在Windows/Mac系统上其中一个文件/文件夹后面会加上“大小写冲突”的字样`
3. :set ambiwidth=double
4. mouse drag to change terminal window width, you will see the symbol `>` which break a line misdisplay.

### Expected behavior
there won't any character in the right of the symbol `>`.
### Actual behavior
there are some character in the right of the symbol `>`.
| 1.0 | `set ambiwidth=double` will cause display error - ### Neovim version (nvim -v)
NVIM v0.7.0 Build type: Release
### Vim (not Nvim) behaves the same?
no, vim8.2
### Operating system/version
manjaro
### Terminal name/version
terminator
### $TERM environment variable
xterm-256color
### Installation
AUR
### How to reproduce the issue
1. nvim --clean
2. paste this line: `因此如果您在网页版的坚果云发现同一个目录下有两个名称为“Nutstore”和“nutstore”(小写的“n”)的文件/文件夹,在Windows/Mac系统上其中一个文件/文件夹后面会加上“大小写冲突”的字样`
3. :set ambiwidth=double
4. mouse drag to change terminal window width, you will see the symbol `>` which break a line misdisplay.

### Expected behavior
there won't any character in the right of the symbol `>`.
### Actual behavior
there are some character in the right of the symbol `>`.
| non_priority | set ambiwidth double will cause display error neovim version nvim v nvim build type release vim not nvim behaves the same no operating system version manjaro terminal name version terminator term environment variable xterm installation aur how to reproduce the issue nvim clean paste this line 因此如果您在网页版的坚果云发现同一个目录下有两个名称为“nutstore”和“nutstore”(小写的“n”)的文件 文件夹,在windows mac系统上其中一个文件 文件夹后面会加上“大小写冲突”的字样 set ambiwidth double mouse drag to change terminal window width you will see the symbol which break a line misdisplay expected behavior there won t any character in the right of the symbol actual behavior there are some character in the right of the symbol | 0 |
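The `ambiwidth` option in the row above controls how characters with the Unicode East Asian Width property "Ambiguous" are rendered: with `ambiwidth=double` they occupy two cells instead of one, which is what shifts line-wrapping. A small sketch of that property using the stdlib `unicodedata` module, under the simplifying assumption that `ambiwidth=double` just treats category `A` as width 2 (real terminal width logic also handles combining and zero-width characters, which are ignored here):

```python
import unicodedata

def cell_width(ch, ambiwidth="single"):
    """Approximate terminal cell width of one character.

    East Asian Width categories: F/W are always wide; A ("ambiguous")
    is wide only when ambiwidth=double -- the behaviour the option toggles.
    """
    eaw = unicodedata.east_asian_width(ch)
    if eaw in ("F", "W"):
        return 2
    if eaw == "A" and ambiwidth == "double":
        return 2
    return 1

def line_width(s, ambiwidth="single"):
    return sum(cell_width(ch, ambiwidth) for ch in s)
```

CJK ideographs like those in the repro line are category `W` (always wide), but the fullwidth-style quotation marks in it are `A`, so the computed line width grows under `ambiwidth=double` and the wrap point (the `>` marker in the screenshot) moves.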
1,820 | 2,574,686,931 | IssuesEvent | 2015-02-11 18:22:47 | ORNL-CEES/DataTransferKit | https://api.github.com/repos/ORNL-CEES/DataTransferKit | opened | Fix Failing/Passing SplineInterpolation test | bug Priority Testing | The SplineInterpolation test fails on my machine but passes on most of the CDash builds. There are CDash builds in which this test passes some nights and not others. The test started failing after Tpetra switched to view semantics as part of the Kokkos refactor. This issue will track the process of finding the bugs. It may be a memory leak. | 1.0 | Fix Failing/Passing SplineInterpolation test - The SplineInterpolation test fails on my machine but passes on most of the CDash builds. There are CDash builds in which this test passes some nights and not others. The test started failing after Tpetra switched to view semantics as part of the Kokkos refactor. This issue will track the process of finding the bugs. It may be a memory leak. | non_priority | fix failing passing splineinterpolation test the splineinterpolation test fails on my machine but passes on most of the cdash builds there are cdash builds in which this test passes some nights and not others the test started failing after tpetra switched to view semantics as part of the kokkos refactor this issue will track the process of finding the bugs it may be a memory leak | 0 |
570,116 | 17,019,208,860 | IssuesEvent | 2021-07-02 16:09:19 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Add Site should probably read Add site in new-tab page's Favorites | OS/Desktop feature/new-tab good first issue needs-text-change polish priority/P4 release-notes/exclude | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Add Site should probably read Add site in new-tab page's Favorites
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. open the new-tab page
2. click `Customize`
3. choose `Top Sites` from the left
4. choose `Favorites`
5. dismiss the dialog
6. hover to the right of the `Add site` button
7. click on the ellipsis
8. note the capitalization of `Add Site`
## Actual result:
<!--Please add screenshots if needed-->
It reads `Add Site` (upper-case)
<img width="1607" alt="Screen Shot 2021-03-09 at 11 14 11 AM" src="https://user-images.githubusercontent.com/387249/110524975-e8a6eb80-80c8-11eb-8960-986ee8c0057a.png">
## Expected result:
`Add site` (lower-case)
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.22.55 Chromium: 89.0.4389.72 (Official Build) beta (x86_64)
-- | --
Revision | 3f345f156bfd157bd1bea06310e55f3fb2490359-refs/branch-heads/4389@{#1393}
OS | macOS Version 11.2.3 (Build 20D91)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? no
- Can you reproduce this issue with the beta channel? no
- Can you reproduce this issue with the nightly channel? yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| 1.0 | Add Site should probably read Add site in new-tab page's Favorites - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Add Site should probably read Add site in new-tab page's Favorites
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. open the new-tab page
2. click `Customize`
3. choose `Top Sites` from the left
4. choose `Favorites`
5. dismiss the dialog
6. hover to the right of the `Add site` button
7. click on the ellipsis
8. note the capitalization of `Add Site`
## Actual result:
<!--Please add screenshots if needed-->
It reads `Add Site` (upper-case)
<img width="1607" alt="Screen Shot 2021-03-09 at 11 14 11 AM" src="https://user-images.githubusercontent.com/387249/110524975-e8a6eb80-80c8-11eb-8960-986ee8c0057a.png">
## Expected result:
`Add site` (lower-case)
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.22.55 Chromium: 89.0.4389.72 (Official Build) beta (x86_64)
-- | --
Revision | 3f345f156bfd157bd1bea06310e55f3fb2490359-refs/branch-heads/4389@{#1393}
OS | macOS Version 11.2.3 (Build 20D91)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? no
- Can you reproduce this issue with the beta channel? no
- Can you reproduce this issue with the nightly channel? yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| priority | add site should probably read add site in new tab page s favorites have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description add site should probably read add site in new tab page s favorites steps to reproduce open the new tab page click customize choose top sites from the left choose favorites dismiss the dialog hover to the right of the add site button click on the ellipsis note the capitalization of add site actual result it reads add site upper case img width alt screen shot at am src expected result add site lower case reproduces how often brave version brave version info brave chromium official build beta revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release no can you reproduce this issue with the beta channel no can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields does the issue resolve itself when disabling brave rewards is the issue reproducible on the latest version of chrome miscellaneous information | 1 |
597,942 | 18,216,690,592 | IssuesEvent | 2021-09-30 05:50:58 | literakl/mezinamiridici | https://api.github.com/repos/literakl/mezinamiridici | opened | Anchor flag for blogs | type: enhancement priority: P3 | Blog subtype: short text and a link to external article. Such an article must be visually distinguished in the stream. | 1.0 | Anchor flag for blogs - Blog subtype: short text and a link to external article. Such an article must be visually distinguished in the stream. | priority | anchor flag for blogs blog subtype short text and a link to external article such an article must be visually distinguished in the stream | 1 |
18,944 | 13,173,644,220 | IssuesEvent | 2020-08-11 20:43:36 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Incorrect PR build leg paths for 5.0 Alpine | area:infrastructure bug triaged | The changes from https://github.com/dotnet/dotnet-docker/pull/2064 end up causing the 5.0 Alpine PR build leg in nightly to build a bunch of images that should not be built. This is due to the build matrix generation. This leg ends up having the following build paths defined for it:
```
--path src/runtime-deps/5.0/alpine3.12/amd64 --path src/runtime/5.0/alpine3.12/amd64 --path src/aspnet/5.0/alpine3.12/amd64 --path src/sdk/5.0/alpine3.12/amd64 --path src/monitor/5.0/alpine/amd64 --path src/sdk/3.1/alpine3.12/amd64 --path src/aspnet/3.1/alpine3.12/amd64
```
It should only be built with the following:
```
--path src/runtime-deps/5.0/alpine3.12/amd64 --path src/runtime/5.0/alpine3.12/amd64 --path src/aspnet/5.0/alpine3.12/amd64 --path src/sdk/5.0/alpine3.12/amd64
``` | 1.0 | Incorrect PR build leg paths for 5.0 Alpine - The changes from https://github.com/dotnet/dotnet-docker/pull/2064 end up causing the 5.0 Alpine PR build leg in nightly to build a bunch of images that should not be built. This is due to the build matrix generation. This leg ends up having the following build paths defined for it:
```
--path src/runtime-deps/5.0/alpine3.12/amd64 --path src/runtime/5.0/alpine3.12/amd64 --path src/aspnet/5.0/alpine3.12/amd64 --path src/sdk/5.0/alpine3.12/amd64 --path src/monitor/5.0/alpine/amd64 --path src/sdk/3.1/alpine3.12/amd64 --path src/aspnet/3.1/alpine3.12/amd64
```
It should only be built with the following:
```
--path src/runtime-deps/5.0/alpine3.12/amd64 --path src/runtime/5.0/alpine3.12/amd64 --path src/aspnet/5.0/alpine3.12/amd64 --path src/sdk/5.0/alpine3.12/amd64
``` | non_priority | incorrect pr build leg paths for alpine the changes from end up causing the alpine pr build leg in nightly to build a bunch of images that should not be built this is due to the build matrix generation this leg ends up having the following build paths defined for it path src runtime deps path src runtime path src aspnet path src sdk path src monitor alpine path src sdk path src aspnet it should only be built with the following path src runtime deps path src runtime path src aspnet path src sdk | 0 |
4,830 | 3,896,897,613 | IssuesEvent | 2016-04-16 03:02:23 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 16284628: Autolayout Constraint Label not updated in IB Tree | classification:ui/usability reproducible:sometimes status:open | #### Description
Summary:
Two similar constraints, one updates in the sidebar the other doesn't.
Steps to Reproduce:
Add a "box" to a view controller in IB with storyboards enabled. Add a leading edge constraint with a value of 60 points. Add a trailing space constraint with the same values.
Then select the constraints in the tree and set the values to greater than 40 points, priority High (750).
Expected Results:
Both constrains would have "Horizontal Space (> 40)" as a label.
Actual Results:
One of the constraints still says "Horizontal Space (60)" as a label
cur, such as software versions and/or hardware configurations.
Notes:
Screenshot: http://cl.ly/image/0r2u1B1o2B04
-
Product Version: Version 5.1 (5B130a)
Created: 2014-03-11T03:01:12.979421
Originated: 2014-03-10T22:01:00
Open Radar Link: http://www.openradar.me/16284628 | True | 16284628: Autolayout Constraint Label not updated in IB Tree - #### Description
Summary:
Two similar constraints, one updates in the sidebar the other doesn't.
Steps to Reproduce:
Add a "box" to a view controller in IB with storyboards enabled. Add a leading edge constraint with a value of 60 points. Add a trailing space constraint with the same values.
Then select the constraints in the tree and set the values to greater than 40 points, priority High (750).
Expected Results:
Both constrains would have "Horizontal Space (> 40)" as a label.
Actual Results:
One of the constraints still says "Horizontal Space (60)" as a label
cur, such as software versions and/or hardware configurations.
Notes:
Screenshot: http://cl.ly/image/0r2u1B1o2B04
-
Product Version: Version 5.1 (5B130a)
Created: 2014-03-11T03:01:12.979421
Originated: 2014-03-10T22:01:00
Open Radar Link: http://www.openradar.me/16284628 | non_priority | autolayout constraint label not updated in ib tree description summary two similar constraints one updates in the sidebar the other doesn t steps to reproduce add a box to a view controller in ib with storyboards enabled add a leading edge constraint with a value of points add a trailing space constraint with the same values then select the constraints in the tree and set the values to greater than points priority high expected results both constrains would have horizontal space as a label actual results one of the constraints still says horizontal space as a label cur such as software versions and or hardware configurations notes screenshot product version version created originated open radar link | 0 |
456,780 | 13,150,997,156 | IssuesEvent | 2020-08-09 14:35:13 | chrisjsewell/docutils | https://api.github.com/repos/chrisjsewell/docutils | closed | The inline markup recognition rule example "2 * x *a **b *.txt" does not validate. [SF:bugs:308] | bugs closed-fixed priority-3 |
author: edauvergne
created: 2017-02-08 21:19:56.282000
assigned: goodger
SF_url: https://sourceforge.net/p/docutils/bugs/308
From the [inline markup recognition rules](http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#inline-markup-recognition-rules), the following example is given for not requiring escaping due to the breaking of rule 2.
```
2 * x *a **b *.txt
```
However I don't see how rule 2 comes into this. And the conversion of this text by docutils to XML gives:
```
$ rst2xml aaa.rst
aaa.rst:1: (WARNING/2) Inline emphasis start-string without end-string.
aaa.rst:1: (WARNING/2) Inline strong start-string without end-string.
aaa.rst:1: (WARNING/2) Inline emphasis start-string without end-string.
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE document PUBLIC "+//IDN docutils.sourceforge.net//DTD Docutils Generic//EN//XML" "http://docutils.sourceforge.net/docs/ref/docutils.dtd">
<!-- Generated by Docutils 0.12 -->
<document source="aaa.rst"><paragraph>2 * x <problematic ids="id2" refid="id1">*</problematic>a <problematic ids="id4" refid="id3">**</problematic>b <problematic ids="id6" refid="id5">*</problematic>.txt</paragraph><system_message backrefs="id2" ids="id1" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline emphasis start-string without end-string.</paragraph></system_message><system_message backrefs="id4" ids="id3" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline strong start-string without end-string.</paragraph></system_message><system_message backrefs="id6" ids="id5" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline emphasis start-string without end-string.</paragraph></system_message></document>$
```
---
commenter: goodger
posted: 2017-02-08 22:13:00.509000
title: #308 The inline markup recognition rule example "2 * x *a **b *.txt" does not validate.
You need to look at the reST sources, not the formatted HTML. Also, I had to guess which line you're talking about; please be more specific (give some context).
I believe the line you're talking about is:
~~~
No escaping is required inside the following inline markup examples:
- *2 * x *a **b *.txt* (breaks 2)
~~~
It's referring to the no escaping being needed inside the outermost asterisks, which makes the whole phrase italic.
Fixed the doc to make this clearer. Closing bug.
---
commenter: goodger
posted: 2017-02-08 22:13:20.340000
title: #308 The inline markup recognition rule example "2 * x *a **b *.txt" does not validate.
- **status**: open --> closed-fixed
- **assigned_to**: David Goodger
| 1.0 | The inline markup recognition rule example "2 * x *a **b *.txt" does not validate. [SF:bugs:308] -
author: edauvergne
created: 2017-02-08 21:19:56.282000
assigned: goodger
SF_url: https://sourceforge.net/p/docutils/bugs/308
From the [inline markup recognition rules](http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#inline-markup-recognition-rules), the following example is given for not requiring escaping due to the breaking of rule 2.
```
2 * x *a **b *.txt
```
However I don't see how rule 2 comes into this. And the conversion of this text by docutils to XML gives:
```
$ rst2xml aaa.rst
aaa.rst:1: (WARNING/2) Inline emphasis start-string without end-string.
aaa.rst:1: (WARNING/2) Inline strong start-string without end-string.
aaa.rst:1: (WARNING/2) Inline emphasis start-string without end-string.
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE document PUBLIC "+//IDN docutils.sourceforge.net//DTD Docutils Generic//EN//XML" "http://docutils.sourceforge.net/docs/ref/docutils.dtd">
<!-- Generated by Docutils 0.12 -->
<document source="aaa.rst"><paragraph>2 * x <problematic ids="id2" refid="id1">*</problematic>a <problematic ids="id4" refid="id3">**</problematic>b <problematic ids="id6" refid="id5">*</problematic>.txt</paragraph><system_message backrefs="id2" ids="id1" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline emphasis start-string without end-string.</paragraph></system_message><system_message backrefs="id4" ids="id3" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline strong start-string without end-string.</paragraph></system_message><system_message backrefs="id6" ids="id5" level="2" line="1" source="aaa.rst" type="WARNING"><paragraph>Inline emphasis start-string without end-string.</paragraph></system_message></document>$
```
---
commenter: goodger
posted: 2017-02-08 22:13:00.509000
title: #308 The inline markup recognition rule example "2 * x *a **b *.txt" does not validate.
You need to look at the reST sources, not the formatted HTML. Also, I had to guess which line you're talking about; please be more specific (give some context).
I believe the line you're talking about is:
~~~
No escaping is required inside the following inline markup examples:
- *2 * x *a **b *.txt* (breaks 2)
~~~
It's referring to the no escaping being needed inside the outermost asterisks, which makes the whole phrase italic.
Fixed the doc to make this clearer. Closing bug.
---
commenter: goodger
posted: 2017-02-08 22:13:20.340000
title: #308 The inline markup recognition rule example "2 * x *a **b *.txt" does not validate.
- **status**: open --> closed-fixed
- **assigned_to**: David Goodger
| priority | the inline markup recognition rule example x a b txt does not validate author edauvergne created assigned goodger sf url from the the following example is given for not requiring escaping due to the breaking of rule x a b txt however i don t see how rule comes into this and the conversion of this text by docutils to xml gives aaa rst aaa rst warning inline emphasis start string without end string aaa rst warning inline strong start string without end string aaa rst warning inline emphasis start string without end string doctype document public idn docutils sourceforge net dtd docutils generic en xml x a b txt inline emphasis start string without end string inline strong start string without end string inline emphasis start string without end string commenter goodger posted title the inline markup recognition rule example x a b txt does not validate you need to look at the rest sources not the formatted html also i had to guess which line you re talking about please be more specific give some context i believe the line you re talking about is no escaping is required inside the following inline markup examples x a b txt breaks it s referring to the no escaping being needed inside the outermost asterisks which makes the whole phrase italic fixed the doc to make this clearer closing bug commenter goodger posted title the inline markup recognition rule example x a b txt does not validate status open closed fixed assigned to david goodger | 1 |
340,522 | 10,273,144,534 | IssuesEvent | 2019-08-23 18:26:24 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | Grid number filters dont allow negative symbol | bug-type: broken use case priority: medium problem: bug type: corrective | A user cannot type the `-` symbol in a min/max filter box in the data grid. However you can compose a negative number in `notepad.exe` and then paste it into the filter box ok.
[This service](http://section917.cloudapp.net/arcgis/rest/services/TestData/Oilsands/MapServer/0) has negative numbers in the `longitude` field | 1.0 | Grid number filters dont allow negative symbol - A user cannot type the `-` symbol in a min/max filter box in the data grid. However you can compose a negative number in `notepad.exe` and then paste it into the filter box ok.
[This service](http://section917.cloudapp.net/arcgis/rest/services/TestData/Oilsands/MapServer/0) has negative numbers in the `longitude` field | priority | grid number filters dont allow negative symbol a user cannot type the symbol in a min max filter box in the data grid however you can compose a negative number in notepad exe and then paste it into the filter box ok has negative numbers in the longitude field | 1 |
280,626 | 21,313,875,353 | IssuesEvent | 2022-04-16 01:14:13 | uf-mil/mil | https://api.github.com/repos/uf-mil/mil | opened | Document documentation generation process | documentation enhancement software | The documentation process has grown increasingly complex with the addition of `autodoc`, attribute tables, and building in containerized environments.
As a result, this should all be documented. | 1.0 | Document documentation generation process - The documentation process has grown increasingly complex with the addition of `autodoc`, attribute tables, and building in containerized environments.
As a result, this should all be documented. | non_priority | document documentation generation process the documentation process has grown increasingly complex with the addition of autodoc attribute tables and building in containerized environments as a result this should all be documented | 0 |
54,289 | 3,062,998,462 | IssuesEvent | 2015-08-17 02:16:01 | Miniand/brdg.me-issues | https://api.github.com/repos/Miniand/brdg.me-issues | closed | Simplify command backend | priority:medium project:server type:enhancement | It's currently too complex and poorly abstracted, which can complicate commands sitting on top of state machines (limiting the ability to support things like mid-game votes.) Things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer UIs for mobile / web.
Current plan:
- [x] Remove `CanCall` from `Command` interface
- [x] Remove `Parse` from `Command` interface
- [x] Add `Name` to command interface to simplify command matching
- [x] Make `Call` accept a plain `string` input for arguments which it can parse however the implementation deems fit
- [x] Add `player string` argument to `Commands` function of `Game` interface
- [x] Move call authorisation logic inside `Commands` function of `Game` interface
- [x] Remove `AvailableCommands` helper as it will no longer be required | 1.0 | Simplify command backend - It's currently too complex and poorly abstracted, which can complicate commands sitting on top of state machines (limiting the ability to support things like mid-game votes.) Things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer UIs for mobile / web.
Current plan:
- [x] Remove `CanCall` from `Command` interface
- [x] Remove `Parse` from `Command` interface
- [x] Add `Name` to command interface to simplify command matching
- [x] Make `Call` accept a plain `string` input for arguments which it can parse however the implementation deems fit
- [x] Add `player string` argument to `Commands` function of `Game` interface
- [x] Move call authorisation logic inside `Commands` function of `Game` interface
- [x] Remove `AvailableCommands` helper as it will no longer be required | priority | simplify command backend it s currently too complex and poorly abstracted which can complicate commands sitting on top of state machines limiting the ability to support things like mid game votes things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer uis for mobile web current plan remove cancall from command interface remove parse from command interface add name to command interface to simplify command matching make call accept a plain string input for arguments which it can parse however the implementation deems fit add player string argument to commands function of game interface move call authorisation logic inside commands function of game interface remove availablecommands helper as it will no longer be required | 1 |
531,114 | 15,441,182,200 | IssuesEvent | 2021-03-08 05:21:34 | octobercms/october | https://api.github.com/repos/octobercms/october | closed | Adding multiple images inside rich text editor with a single drag | Priority: Low Status: In Progress Type: Enhancement | ##### Expected behavior
So the richtext editor froala comes with a nice feature, adding images from the gallery, when I multiselect images and click insert ... all the images I selected should be added to my text.
##### Actual behavior
Only one image is added
##### Reproduce steps
add a rich text editor to your model, when you are inserting the data try to add multiple images as mentioned in the (Expected behavior).
##### October build
431

| 1.0 | Adding multiple images inside rich text editor with a single drag - ##### Expected behavior
So the richtext editor froala comes with a nice feature, adding images from the gallery, when I multiselect images and click insert ... all the images I selected should be added to my text.
##### Actual behavior
Only one image is added
##### Reproduce steps
add a rich text editor to your model, when you are inserting the data try to add multiple images as mentioned in the (Expected behavior).
##### October build
431

| priority | adding multiple images inside rich text editor with a single drag expected behavior so the richtext editor froala comes with a nice feature adding images from the gallery when i multiselect images and click insert all the images i selected should be added to my text actual behavior only one image is added reproduce steps add a rich text editor to your model when you are inserting the data try to add multiple images as mentioned in the expected behavior october build | 1 |
204,679 | 15,947,324,380 | IssuesEvent | 2021-04-15 03:13:13 | akiradeveloper/lol | https://api.github.com/repos/akiradeveloper/lol | opened | Use term "bootstrap" | documentation | To add a new node, there should be an existing cluster to accept the joining. But how about the first node?
The first node is taken as a special node that forms a single node cluster with only itself.
Other softwares like elasticsearch and Consul seem to do the same thing and both call it bootstrapping.
- https://www.elastic.co/blog/a-new-era-for-cluster-coordination-in-elasticsearch: if you want to start a brand new cluster that has nodes on more than one host, you must specify the initial set of master-eligible nodes that the cluster should use as voting configuration in its first election. This is known as cluster bootstrapping
- https://www.consul.io/docs/architecture/consensus: When getting started, a single Consul server is put into "bootstrap" mode.
lol should use the same term. | 1.0 | Use term "bootstrap" - To add a new node, there should be an existing cluster to accept the joining. But how about the first node?
The first node is taken as a special node that forms a single node cluster with only itself.
Other softwares like elasticsearch and Consul seem to do the same thing and both call it bootstrapping.
- https://www.elastic.co/blog/a-new-era-for-cluster-coordination-in-elasticsearch: if you want to start a brand new cluster that has nodes on more than one host, you must specify the initial set of master-eligible nodes that the cluster should use as voting configuration in its first election. This is known as cluster bootstrapping
- https://www.consul.io/docs/architecture/consensus: When getting started, a single Consul server is put into "bootstrap" mode.
lol should use the same term. | non_priority | use term bootstrap to add a new node there should be an existing cluster to accept the joining but how about the first node the first node is taken as a special node that forms a single node cluster with only itself other softwares like elasticsearch and consul seem to do the same thing and both call it bootstrapping if you want to start a brand new cluster that has nodes on more than one host you must specify the initial set of master eligible nodes that the cluster should use as voting configuration in its first election this is known as cluster bootstrapping when getting started a single consul server is put into bootstrap mode lol should use the same term | 0 |
686,075 | 23,476,050,841 | IssuesEvent | 2022-08-17 06:07:45 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Readonly Service Object doesn't Validate at Compile time | Type/Bug Priority/Blocker Team/CompilerFE | **Description:**
Consider the following record:
```ballerina
public type GraphqlServiceConfig record {|
readonly readonly & Interceptor[] interceptors = [];
|};
public type Interceptor distinct service object {
isolated remote function execute(Context context, Field 'field) returns anydata|error;
};
```
GraphQL module uses a similar record as the GraphQL ServiceConfig. But, when passing an invalid value into `interceptors` field, It's not validated at the compile time. Consider the following GraphQL service.
```ballerina
readonly service class PersonInterceptor {
}
@graphql:ServiceConfig {
interceptors: [new PersonInterceptor()]
}
service /graphql on new graphql:Listener(9000) {
isolated resource function get name() returns string {
return "Ballerina";
}
}
```
Since `PersonInterceptor` is an invalid value, ideally this should be invalidated at compile time. But currently, it returns the following error at runtime.
```shell
Compiling source
interceptors_with_records.bal
Running executable
error: {ballerina/lang.array}InherentTypeViolation {"message":"incompatible types: expected '(graphql:Interceptor & readonly)', found 'PersonInterceptor'"}
at interceptors_with_records:$annot_func$_0(interceptors_with_records.bal:12)
```
**Affected Versions:**
`2201.2.0` | 1.0 | Readonly Service Object doesn't Validate at Compile time - **Description:**
Consider the following record:
```ballerina
public type GraphqlServiceConfig record {|
readonly readonly & Interceptor[] interceptors = [];
|};
public type Interceptor distinct service object {
isolated remote function execute(Context context, Field 'field) returns anydata|error;
};
```
GraphQL module uses a similar record as the GraphQL ServiceConfig. But, when passing an invalid value into `interceptors` field, It's not validated at the compile time. Consider the following GraphQL service.
```ballerina
readonly service class PersonInterceptor {
}
@graphql:ServiceConfig {
interceptors: [new PersonInterceptor()]
}
service /graphql on new graphql:Listener(9000) {
isolated resource function get name() returns string {
return "Ballerina";
}
}
```
Since `PersonInterceptor` is an invalid value, ideally this should be invalidated at compile time. But currently, it returns the following error at runtime.
```shell
Compiling source
interceptors_with_records.bal
Running executable
error: {ballerina/lang.array}InherentTypeViolation {"message":"incompatible types: expected '(graphql:Interceptor & readonly)', found 'PersonInterceptor'"}
at interceptors_with_records:$annot_func$_0(interceptors_with_records.bal:12)
```
**Affected Versions:**
`2201.2.0` | priority | readonly service object doesn t validate at compile time description consider the following record ballerina public type graphqlserviceconfig record readonly readonly interceptor interceptors public type interceptor distinct service object isolated remote function execute context context field field returns anydata error graphql module uses a similar record as the graphql serviceconfig but when passing an invalid value into interceptors field it s not validated at the compile time consider the following graphql service ballerina readonly service class personinterceptor graphql serviceconfig interceptors service graphql on new graphql listener isolated resource function get name returns string return ballerina since personinterceptor is an invalid value ideally this should be invalidated at compile time but currently it returns the following error at runtime shell compiling source interceptors with records bal running executable error ballerina lang array inherenttypeviolation message incompatible types expected graphql interceptor readonly found personinterceptor at interceptors with records annot func interceptors with records bal affected versions | 1 |
636,370 | 20,598,476,032 | IssuesEvent | 2022-03-05 22:13:23 | mreishman/Log-Hog | https://api.github.com/repos/mreishman/Log-Hog | closed | Combine whats new and change log pages | enhancement Priority - 3 - Medium | - [x] Move whats new images into change log
- [x] change images into slideshow with click to go full screen | 1.0 | Combine whats new and change log pages - - [x] Move whats new images into change log
- [x] change images into slideshow with click to go full screen | priority | combine whats new and change log pages move whats new images into change log change images into slideshow with click to go full screen | 1 |
271,712 | 8,488,808,431 | IssuesEvent | 2018-10-26 17:50:36 | cyberperspectives/sagacity | https://api.github.com/repos/cyberperspectives/sagacity | closed | Ops Page slow category load - db->get_Finding_Count_By_Status | High Priority bug | The Ops page overall loads quickly, until it gets to a category with a lot of hosts (like the 20-30 Win 10 SCC targets). The attached ste_index_php cachegrind files show that get_Finding_Count_By_Status is taking an inordinate amount of time.
[cachegrind.out.zip](https://github.com/cyberperspectives/sagacity/files/2484482/cachegrind.out.zip)
| 1.0 | Ops Page slow category load - db->get_Finding_Count_By_Status - The Ops page overall loads quickly, until it gets to a category with a lot of hosts (like the 20-30 Win 10 SCC targets). The attached ste_index_php cachegrind files show that get_Finding_Count_By_Status is taking an inordinate amount of time.
[cachegrind.out.zip](https://github.com/cyberperspectives/sagacity/files/2484482/cachegrind.out.zip)
| priority | ops page slow category load db get finding count by status the ops page overall loads quickly until it gets to a category with a lot of hosts like the win scc targets the attached ste index php cachegrind files show that get finding count by status is taking an inordinate amount of time | 1 |
178,073 | 6,598,694,744 | IssuesEvent | 2017-09-16 09:25:15 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Need downgrade tests for 1.8 release | kind/bug priority/critical-urgent sig/release | From what I learned on slack we're testing upgrades only - we check if we upgrade 1.7 cluster to 1.8 cluster things keep working. On the other hand we're NOT testing if downgrading 1.8 cluster to 1.7 works at all. Downgrades are extremely important for everyone running kubernetes in production - if it turns out that 1.8 has a bug that breaks some of user's workloads.
Standard mitigation policy in such case is to downgrade cluster to 1.7 and start working on a fix. In the absence of downgrade possibility, he/she will need to wait, possibly days, for hot-fix which is risky by itself.
We should consider making downgrades work a prerequisite for releasing 1.8. I'm aware that we haven't done downgrade testing since around 1.5, but the fact that we somehow managed that until now doesn't mean we should continue to ignore this problem.
@kubernetes/sig-release-bugs @thockin @bgrant0607 @smarterclayton @spiffxp | 1.0 | Need downgrade tests for 1.8 release - From what I learned on slack we're testing upgrades only - we check if we upgrade 1.7 cluster to 1.8 cluster things keep working. On the other hand we're NOT testing if downgrading 1.8 cluster to 1.7 works at all. Downgrades are extremely important for everyone running kubernetes in production - if it turns out that 1.8 has a bug that breaks some of user's workloads.
Standard mitigation policy in such case is to downgrade cluster to 1.7 and start working on a fix. In the absence of downgrade possibility, he/she will need to wait, possibly days, for hot-fix which is risky by itself.
We should consider making downgrades work a prerequisite for releasing 1.8. I'm aware that we haven't done downgrade testing since around 1.5, but the fact that we somehow managed that until now doesn't mean we should continue to ignore this problem.
@kubernetes/sig-release-bugs @thockin @bgrant0607 @smarterclayton @spiffxp | priority | need downgrade tests for release from what i learned on slack we re testing upgrades only we check if we upgrade cluster to cluster things keep working on the other hand we re not testing if downgrading cluster to works at all downgrades are extremely important for everyone running kubernetes in production if it turns out that has a bug that breaks some of user s workloads standard mitigation policy in such case is to downgrade cluster to and start working on a fix in the absence of downgrade possibility he she will need to wait possibly days for hot fix which is risky by itself we should consider making downgrades work a prerequisite for releasing i m aware that we haven t done downgrade testing since around but the fact that we somehow managed that until now doesn t mean we should continue to ignore this problem kubernetes sig release bugs thockin smarterclayton spiffxp | 1 |
522,397 | 15,159,061,166 | IssuesEvent | 2021-02-12 03:02:57 | apcountryman/picolibrary-microchip-megaavr | https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr | closed | Add SPI peripheral based SPI basic controller | priority-normal status-awaiting_approval type-feature | Add SPI peripheral based SPI basic controller (`::picolibrary::Microchip::megaAVR::SPI::Basic_Controller<::picolibrary::Microchip::megaAVR::Peripheral::SPI>`) and associated echo interactive test program. | 1.0 | Add SPI peripheral based SPI basic controller - Add SPI peripheral based SPI basic controller (`::picolibrary::Microchip::megaAVR::SPI::Basic_Controller<::picolibrary::Microchip::megaAVR::Peripheral::SPI>`) and associated echo interactive test program. | priority | add spi peripheral based spi basic controller add spi peripheral based spi basic controller picolibrary microchip megaavr spi basic controller and associated echo interactive test program | 1 |
66,183 | 14,767,360,161 | IssuesEvent | 2021-01-10 06:15:53 | shiriivtsan/bebo | https://api.github.com/repos/shiriivtsan/bebo | opened | WS-2019-0103 (Medium) detected in handlebars-1.0.12.tgz | security vulnerability | ## WS-2019-0103 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.0.12.tgz</b></p></summary>
<p>Extension of the Mustache logicless template language</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz</a></p>
<p>Path to dependency file: bebo/decompress-zip-0.0.8/package/package.json</p>
<p>Path to vulnerable library: bebo/decompress-zip-0.0.8/package/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- istanbul-0.1.46.tgz (Root Library)
- :x: **handlebars-1.0.12.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/shiriivtsan/bebo/commit/8eb42e349cd3aded1eab4b65b59788a7e934dd99">8eb42e349cd3aded1eab4b65b59788a7e934dd99</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars.js before 4.1.0 has Remote Code Execution (RCE)
<p>Publish Date: 2019-01-30
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1267#issue-187151586>WS-2019-0103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/wycats/handlebars.js/commit/edc6220d51139b32c28e51641fadad59a543ae57">https://github.com/wycats/handlebars.js/commit/edc6220d51139b32c28e51641fadad59a543ae57</a></p>
<p>Release Date: 2019-05-30</p>
<p>Fix Resolution: 4.1.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"1.0.12","isTransitiveDependency":true,"dependencyTree":"istanbul:0.1.46;handlebars:1.0.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.1.0"}],"vulnerabilityIdentifier":"WS-2019-0103","vulnerabilityDetails":"Handlebars.js before 4.1.0 has Remote Code Execution (RCE)","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/issues/1267#issue-187151586","cvss2Severity":"medium","cvss2Score":"5.5","extraData":{}}</REMEDIATE> --> | True | WS-2019-0103 (Medium) detected in handlebars-1.0.12.tgz - ## WS-2019-0103 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.0.12.tgz</b></p></summary>
<p>Extension of the Mustache logicless template language</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.0.12.tgz</a></p>
<p>Path to dependency file: bebo/decompress-zip-0.0.8/package/package.json</p>
<p>Path to vulnerable library: bebo/decompress-zip-0.0.8/package/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- istanbul-0.1.46.tgz (Root Library)
- :x: **handlebars-1.0.12.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/shiriivtsan/bebo/commit/8eb42e349cd3aded1eab4b65b59788a7e934dd99">8eb42e349cd3aded1eab4b65b59788a7e934dd99</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars.js before 4.1.0 has Remote Code Execution (RCE)
<p>Publish Date: 2019-01-30
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1267#issue-187151586>WS-2019-0103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/wycats/handlebars.js/commit/edc6220d51139b32c28e51641fadad59a543ae57">https://github.com/wycats/handlebars.js/commit/edc6220d51139b32c28e51641fadad59a543ae57</a></p>
<p>Release Date: 2019-05-30</p>
<p>Fix Resolution: 4.1.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"1.0.12","isTransitiveDependency":true,"dependencyTree":"istanbul:0.1.46;handlebars:1.0.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.1.0"}],"vulnerabilityIdentifier":"WS-2019-0103","vulnerabilityDetails":"Handlebars.js before 4.1.0 has Remote Code Execution (RCE)","vulnerabilityUrl":"https://github.com/wycats/handlebars.js/issues/1267#issue-187151586","cvss2Severity":"medium","cvss2Score":"5.5","extraData":{}}</REMEDIATE> --> | non_priority | ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz extension of the mustache logicless template language library home page a href path to dependency file bebo decompress zip package package json path to vulnerable library bebo decompress zip package node modules handlebars package json dependency hierarchy istanbul tgz root library x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details handlebars js before has remote code execution rce publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails handlebars js before has remote code execution rce vulnerabilityurl | 0 |
71,837 | 23,822,387,482 | IssuesEvent | 2022-09-05 12:24:55 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | closed | Mention pills from partial unsent message disappear when coming back to Room | T-Defect A-Composer A-Room | ### Steps to reproduce
1. Go to any room
2. Use a mention pill
3. Leave the screen OR kill the app
4. Come back to the same room
### Outcome
#### What did you expect?
Mention pill from stored message is restored
#### What happened instead?
Mention pill disappears and get replaced by a whitespace
### Your phone model
Any
### Operating system version
Any
### Application version
develop
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Mention pills from partial unsent message disappear when coming back to Room - ### Steps to reproduce
1. Go to any room
2. Use a mention pill
3. Leave the screen OR kill the app
4. Come back to the same room
### Outcome
#### What did you expect?
Mention pill from stored message is restored
#### What happened instead?
Mention pill disappears and get replaced by a whitespace
### Your phone model
Any
### Operating system version
Any
### Application version
develop
### Homeserver
_No response_
### Will you send logs?
No | non_priority | mention pills from partial unsent message disappear when coming back to room steps to reproduce go to any room use a mention pill leave the screen or kill the app come back to the same room outcome what did you expect mention pill from stored message is restored what happened instead mention pill disappears and get replaced by a whitespace your phone model any operating system version any application version develop homeserver no response will you send logs no | 0 |
263,868 | 28,070,715,485 | IssuesEvent | 2023-03-29 18:51:00 | turkdevops/cirrus-ci-web | https://api.github.com/repos/turkdevops/cirrus-ci-web | opened | CVE-2022-25927 (High) detected in ua-parser-js-0.7.31.tgz | Mend: dependency security vulnerability | ## CVE-2022-25927 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.31.tgz</b></p></summary>
<p>Detect Browser, Engine, OS, CPU, and Device type/model from User-Agent data. Supports browser & node.js environment</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.31.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.31.tgz</a></p>
<p>
Dependency Hierarchy:
- react-relay-12.0.0.tgz (Root Library)
- fbjs-3.0.1.tgz
- :x: **ua-parser-js-0.7.31.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/cirrus-ci-web/commit/24a9c65820b016981a42efaa6133b0bbbf1eaf54">24a9c65820b016981a42efaa6133b0bbbf1eaf54</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the package ua-parser-js from 0.7.30 and before 0.7.33, from 0.8.1 and before 1.0.33 are vulnerable to Regular Expression Denial of Service (ReDoS) via the trim() function.
<p>Publish Date: 2023-01-26
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2022-25927">CVE-2022-25927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-26</p>
<p>Fix Resolution: ua-parser-js - 0.7.33,1.0.33</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-25927 (High) detected in ua-parser-js-0.7.31.tgz - ## CVE-2022-25927 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.31.tgz</b></summary>
<p>Detect Browser, Engine, OS, CPU, and Device type/model from User-Agent data. Supports browser & node.js environment</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.31.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.31.tgz</a></p>
<p>
Dependency Hierarchy:
- react-relay-12.0.0.tgz (Root Library)
- fbjs-3.0.1.tgz
- :x: **ua-parser-js-0.7.31.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/cirrus-ci-web/commit/24a9c65820b016981a42efaa6133b0bbbf1eaf54">24a9c65820b016981a42efaa6133b0bbbf1eaf54</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the package ua-parser-js from 0.7.30 and before 0.7.33, from 0.8.1 and before 1.0.33 are vulnerable to Regular Expression Denial of Service (ReDoS) via the trim() function.
<p>Publish Date: 2023-01-26
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2022-25927">CVE-2022-25927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-26</p>
<p>Fix Resolution: ua-parser-js - 0.7.33,1.0.33</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in ua parser js tgz cve high severity vulnerability vulnerable library ua parser js tgz detect browser engine os cpu and device type model from user agent data supports browser node js environment library home page a href dependency hierarchy react relay tgz root library fbjs tgz x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of the package ua parser js from and before from and before are vulnerable to regular expression denial of service redos via the trim function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution ua parser js step up your open source security game with mend | 0 |
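The vulnerability described in this record (ReDoS via `trim()`) refers to a regex-based trim whose trailing-whitespace branch does super-linear work on crafted input. The sketch below reproduces that weakness class in Python for illustration only; it is not the ua-parser-js code, whose fix shipped in 0.7.33 / 1.0.33:

```python
import re
import time

def regex_trim(s: str) -> str:
    # Trim via a backtracking regex. On input like "x<many spaces>y" the
    # trailing branch `\s+$` is retried from every position inside the
    # space run, each attempt scanning toward the end and failing, so the
    # work grows quadratically with the run length -- the weakness class
    # behind regex-based trim() ReDoS reports.
    return re.sub(r"^\s+|\s+$", "", s)

def linear_trim(s: str) -> str:
    # Linear-time alternative: no regex at all.
    return s.strip()

# Both agree on ordinary input.
assert regex_trim("  hello  ") == linear_trim("  hello  ") == "hello"

# Hostile input: a long interior space run. Nothing is trimmed, yet the
# regex version burns quadratic work scanning and re-scanning the run.
hostile = "x" + " " * 2000 + "y"
start = time.perf_counter()
assert regex_trim(hostile) == hostile
elapsed = time.perf_counter() - start  # grows roughly 4x when the run doubles
```

The practical mitigation mirrors the upstream fix: avoid superlinear regexes on attacker-controlled input (here, simply prefer `str.strip()`-style linear trimming).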
274,708 | 23,859,360,102 | IssuesEvent | 2022-09-07 05:04:51 | godotengine/godot | https://api.github.com/repos/godotengine/godot | opened | Very random crashes when executing `SubViewport.set_size_2d_override_stretch` | bug topic:rendering needs testing crash | ### Godot version
4.0.alpha.custom_build. 4b164b8e4
### System information
Ubuntu 22.04 - Nvidia GTX 970, Gnome shell 42 X11
### Issue description
When executing random SubViewport functions, after a while (usually after 30 min of the project running), I get this crash
```
drivers/vulkan/rendering_device_vulkan.cpp:9014:68: runtime error: index 8 out of bounds for type 'VkSampleCountFlagBits [7]'
=================================================================
==15042==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55605079d800 at pc 0x55603a80e2e2 bp 0x7ffd22933f40 sp 0x7ffd22933f30
READ of size 4 at 0x55605079d800 thread T0
#0 0x55603a80e2e1 in RenderingDeviceVulkan::_ensure_supported_sample_count(RenderingDevice::TextureSamples) const drivers/vulkan/rendering_device_vulkan.cpp:9014
#1 0x55603a70ddf7 in RenderingDeviceVulkan::texture_create(RenderingDevice::TextureFormat const&, RenderingDevice::TextureView const&, Vector<Vector<unsigned char> > const&) drivers/vulkan/rendering_device_vulkan.cpp:1736
#2 0x556048cf51ea in RendererRD::TextureStorage::_update_render_target(RendererRD::TextureStorage::RenderTarget*) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2203
#3 0x556048cfbf37 in RendererRD::TextureStorage::render_target_set_size(RID, int, int, unsigned int) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2329
#4 0x55604a4e467f in RendererViewport::viewport_set_size(RID, int, int) servers/rendering/renderer_viewport.cpp:840
#5 0x556047e52272 in RenderingServerDefault::viewport_set_size(RID, int, int) servers/rendering/rendering_server_default.h:583
#6 0x5560419a3c13 in Viewport::_set_size(Vector2i const&, Vector2i const&, Rect2i const&, Transform2D const&, bool) scene/main/viewport.cpp:799
#7 0x556041a5ac96 in SubViewport::set_size_2d_override_stretch(bool) scene/main/viewport.cpp:4072
#8 0x556033940369 in void call_with_variant_args_helper<__UnexistingClass, bool, 0ul>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, Callable::CallError&, IndexSequence<0ul>) core/variant/binder_common.h:262
#9 0x556033939049 in void call_with_variant_args_dv<__UnexistingClass, bool>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, int, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:409
#10 0x556033932620 in MethodBindT<bool>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:320
#11 0x55604c4be894 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:733
#12 0x55604c4bd0cb in Object::callv(StringName const&, Array const&) core/object/object.cpp:670
#13 0x55604c53b02c in void call_with_variant_args_ret_helper<__UnexistingClass, Variant, StringName const&, Array const&, 0ul, 1ul>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, Variant&, Callable::CallError&, IndexSequence<0ul, 1ul>) core/variant/binder_common.h:680
#14 0x55604c534415 in void call_with_variant_args_ret_dv<__UnexistingClass, Variant, StringName const&, Array const&>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, int, Variant&, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:493
#15 0x55604c52c80e in MethodBindTR<Variant, StringName const&, Array const&>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:481
#16 0x5560357d6917 in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1644
#17 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#18 0x55604c4be497 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:711
#19 0x55604ba76de7 in Variant::callp(StringName const&, Variant const**, int, Variant&, Callable::CallError&) core/variant/variant_call.cpp:1048
#20 0x5560357d444c in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1555
#21 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#22 0x5560417eb4c9 in bool Node::_gdvirtual__process_call<false>(double) scene/main/node.h:237
#23 0x556041750f92 in Node::_notification(int) scene/main/node.cpp:56
#24 0x556033e00319 in Node::_notificationv(int, bool) scene/main/node.h:45
#25 0x55604c4bfd71 in Object::notification(int, bool) core/object/object.cpp:790
#26 0x5560418aba3a in SceneTree::_notify_group_pause(StringName const&, int) scene/main/scene_tree.cpp:917
#27 0x55604189c717 in SceneTree::process(double) scene/main/scene_tree.cpp:465
#28 0x5560336e7c39 in Main::iteration() main/main.cpp:2992
#29 0x55603352caf3 in OS_LinuxBSD::run() platform/linuxbsd/os_linuxbsd.cpp:538
#30 0x556033513892 in main platform/linuxbsd/godot_linuxbsd.cpp:72
#31 0x7fec82c06082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
#32 0x55603351332d in _start (/home/runner/work/Qarminer/Qarminer/godot.linuxbsd.tools.x86_64.san+0x36e3632d)
0x55605079d800 is located 4 bytes to the right of global variable 'rasterization_sample_count' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1233:29' (0x55605079d7e0) of size 28
0x55605079d800 is located 32 bytes to the left of global variable 'logic_operations' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1243:17' (0x55605079d820) of size 64
```
This may be a regression (it probably appeared within the last ~1 month)
https://github.com/godotengine/godot/blob/02d510bd079b0730f14680f75a1325ce1da0ac09/drivers/vulkan/rendering_device_vulkan.cpp#L9014
### Steps to reproduce
Not easily reproducible
### Minimal reproduction project
_No response_ | 1.0 | Very random crashes when executing `SubViewport.set_size_2d_override_stretch` - ### Godot version
4.0.alpha.custom_build. 4b164b8e4
### System information
Ubuntu 22.04 - Nvidia GTX 970, Gnome shell 42 X11
### Issue description
When executing random SubViewport functions, after a while (usually after 30 min of the project running), I get this crash
```
drivers/vulkan/rendering_device_vulkan.cpp:9014:68: runtime error: index 8 out of bounds for type 'VkSampleCountFlagBits [7]'
=================================================================
==15042==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55605079d800 at pc 0x55603a80e2e2 bp 0x7ffd22933f40 sp 0x7ffd22933f30
READ of size 4 at 0x55605079d800 thread T0
#0 0x55603a80e2e1 in RenderingDeviceVulkan::_ensure_supported_sample_count(RenderingDevice::TextureSamples) const drivers/vulkan/rendering_device_vulkan.cpp:9014
#1 0x55603a70ddf7 in RenderingDeviceVulkan::texture_create(RenderingDevice::TextureFormat const&, RenderingDevice::TextureView const&, Vector<Vector<unsigned char> > const&) drivers/vulkan/rendering_device_vulkan.cpp:1736
#2 0x556048cf51ea in RendererRD::TextureStorage::_update_render_target(RendererRD::TextureStorage::RenderTarget*) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2203
#3 0x556048cfbf37 in RendererRD::TextureStorage::render_target_set_size(RID, int, int, unsigned int) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2329
#4 0x55604a4e467f in RendererViewport::viewport_set_size(RID, int, int) servers/rendering/renderer_viewport.cpp:840
#5 0x556047e52272 in RenderingServerDefault::viewport_set_size(RID, int, int) servers/rendering/rendering_server_default.h:583
#6 0x5560419a3c13 in Viewport::_set_size(Vector2i const&, Vector2i const&, Rect2i const&, Transform2D const&, bool) scene/main/viewport.cpp:799
#7 0x556041a5ac96 in SubViewport::set_size_2d_override_stretch(bool) scene/main/viewport.cpp:4072
#8 0x556033940369 in void call_with_variant_args_helper<__UnexistingClass, bool, 0ul>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, Callable::CallError&, IndexSequence<0ul>) core/variant/binder_common.h:262
#9 0x556033939049 in void call_with_variant_args_dv<__UnexistingClass, bool>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, int, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:409
#10 0x556033932620 in MethodBindT<bool>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:320
#11 0x55604c4be894 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:733
#12 0x55604c4bd0cb in Object::callv(StringName const&, Array const&) core/object/object.cpp:670
#13 0x55604c53b02c in void call_with_variant_args_ret_helper<__UnexistingClass, Variant, StringName const&, Array const&, 0ul, 1ul>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, Variant&, Callable::CallError&, IndexSequence<0ul, 1ul>) core/variant/binder_common.h:680
#14 0x55604c534415 in void call_with_variant_args_ret_dv<__UnexistingClass, Variant, StringName const&, Array const&>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, int, Variant&, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:493
#15 0x55604c52c80e in MethodBindTR<Variant, StringName const&, Array const&>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:481
#16 0x5560357d6917 in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1644
#17 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#18 0x55604c4be497 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:711
#19 0x55604ba76de7 in Variant::callp(StringName const&, Variant const**, int, Variant&, Callable::CallError&) core/variant/variant_call.cpp:1048
#20 0x5560357d444c in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1555
#21 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#22 0x5560417eb4c9 in bool Node::_gdvirtual__process_call<false>(double) scene/main/node.h:237
#23 0x556041750f92 in Node::_notification(int) scene/main/node.cpp:56
#24 0x556033e00319 in Node::_notificationv(int, bool) scene/main/node.h:45
#25 0x55604c4bfd71 in Object::notification(int, bool) core/object/object.cpp:790
#26 0x5560418aba3a in SceneTree::_notify_group_pause(StringName const&, int) scene/main/scene_tree.cpp:917
#27 0x55604189c717 in SceneTree::process(double) scene/main/scene_tree.cpp:465
#28 0x5560336e7c39 in Main::iteration() main/main.cpp:2992
#29 0x55603352caf3 in OS_LinuxBSD::run() platform/linuxbsd/os_linuxbsd.cpp:538
#30 0x556033513892 in main platform/linuxbsd/godot_linuxbsd.cpp:72
#31 0x7fec82c06082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
#32 0x55603351332d in _start (/home/runner/work/Qarminer/Qarminer/godot.linuxbsd.tools.x86_64.san+0x36e3632d)
0x55605079d800 is located 4 bytes to the right of global variable 'rasterization_sample_count' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1233:29' (0x55605079d7e0) of size 28
0x55605079d800 is located 32 bytes to the left of global variable 'logic_operations' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1243:17' (0x55605079d820) of size 64
```
This may be a regression (it probably appeared within the last ~1 month)
https://github.com/godotengine/godot/blob/02d510bd079b0730f14680f75a1325ce1da0ac09/drivers/vulkan/rendering_device_vulkan.cpp#L9014
### Steps to reproduce
Not easily reproducible
### Minimal reproduction project
_No response_ | non_priority | very random crashes when executing subviewport set size override stretch godot version alpha custom build system information ubuntu nvidia gtx gnome shell issue description when executing random subviewport function then after a while usually after of project running i have this crash drivers vulkan rendering device vulkan cpp runtime error index out of bounds for type vksamplecountflagbits error addresssanitizer global buffer overflow on address at pc bp sp read of size at thread in renderingdevicevulkan ensure supported sample count renderingdevice texturesamples const drivers vulkan rendering device vulkan cpp in renderingdevicevulkan texture create renderingdevice textureformat const renderingdevice textureview const vector const drivers vulkan rendering device vulkan cpp in rendererrd texturestorage update render target rendererrd texturestorage rendertarget servers rendering renderer rd storage rd texture storage cpp in rendererrd texturestorage render target set size rid int int unsigned int servers rendering renderer rd storage rd texture storage cpp in rendererviewport viewport set size rid int int servers rendering renderer viewport cpp in renderingserverdefault viewport set size rid int int servers rendering rendering server default h in viewport set size const const const const bool scene main viewport cpp in subviewport set size override stretch bool scene main viewport cpp in void call with variant args helper unexistingclass void unexistingclass bool variant const callable callerror indexsequence core variant binder common h in void call with variant args dv unexistingclass void unexistingclass bool variant const int callable callerror vector const core variant binder common h in methodbindt call object variant const int callable callerror core object method bind h in object callp stringname const variant const int callable callerror core object object cpp in object callv stringname const array const core object object cpp in void call with variant args ret helper unexistingclass variant unexistingclass stringname const array const variant const variant callable callerror indexsequence core variant binder common h in void call with variant args ret dv unexistingclass variant unexistingclass stringname const array const variant const int variant callable callerror vector const core variant binder common h in methodbindtr call object variant const int callable callerror core object method bind h in gdscriptfunction call gdscriptinstance variant const int callable callerror gdscriptfunction callstate modules gdscript gdscript vm cpp in gdscriptinstance callp stringname const variant const int callable callerror modules gdscript gdscript cpp in object callp stringname const variant const int callable callerror core object object cpp in variant callp stringname const variant const int variant callable callerror core variant variant call cpp in gdscriptfunction call gdscriptinstance variant const int callable callerror gdscriptfunction callstate modules gdscript gdscript vm cpp in gdscriptinstance callp stringname const variant const int callable callerror modules gdscript gdscript cpp in bool node gdvirtual process call double scene main node h in node notification int scene main node cpp in node notificationv int bool scene main node h in object notification int bool core object object cpp in scenetree notify group pause stringname const int scene main scene tree cpp in scenetree process double scene main scene tree cpp in main iteration main main cpp in os linuxbsd run platform linuxbsd os linuxbsd cpp in main platform linuxbsd godot linuxbsd cpp in libc start main lib linux gnu libc so in start home runner work qarminer qarminer godot linuxbsd tools san is located bytes to the right of global variable rasterization sample count defined in drivers vulkan rendering device vulkan cpp of size is located bytes to the left of global variable logic operations defined in drivers vulkan rendering device vulkan cpp of size this may be regression probably happens max month steps to reproduce not easily reproducible minimal reproduction project no response | 0 |
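The ASan report in this record points at an out-of-range index (8) into the 7-entry `VkSampleCountFlagBits` table in `_ensure_supported_sample_count`. The bug class, indexing a fixed-size lookup table with an unvalidated enum-derived value, and its usual fix (clamping before the lookup) can be sketched as follows; the table values and function name below are illustrative Python, not Godot's actual C++:

```python
# Hypothetical 7-entry table of supported MSAA sample counts, indexed by a
# TextureSamples-style enum value (0..6) -- illustrative, not Godot's table.
SAMPLE_COUNT_TABLE = [1, 2, 4, 8, 16, 32, 64]

def ensure_supported_sample_count(requested: int) -> int:
    # Clamp the enum-derived index into the table's valid range before the
    # lookup. An unvalidated index of 8 -- as in the ASan report -- would
    # otherwise read past the end of the 7-entry array.
    clamped = max(0, min(requested, len(SAMPLE_COUNT_TABLE) - 1))
    return SAMPLE_COUNT_TABLE[clamped]

assert ensure_supported_sample_count(2) == 4    # in-range: direct lookup
assert ensure_supported_sample_count(8) == 64   # out-of-range: clamped, no overflow
```

In C++ the same guard would be a bounds check (or clamp) on the enum value before indexing the static array, which turns the global-buffer-overflow into a well-defined fallback.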
177,069 | 6,573,906,451 | IssuesEvent | 2017-09-11 10:37:00 | EyeSeeTea/pictureapp | https://api.github.com/repos/EyeSeeTea/pictureapp | closed | Introduce a variable timeout for WS push | complexity - low (1hr) eReferrals priority - high type - feature | WS can take a variable amount of time to answer a push call. The suggested function describing the timeout to be configured for the call is: number_vouchers x 2000ms | 1.0 | Introduce a variable timeout for WS push - WS can take a variable amount of time to answer a push call. The suggested function describing the timeout to be configured for the call is: number_vouchers x 2000ms | priority | introduce a variable timeout for ws push ws can take a variable amount of time to answer a push call the suggested function describing the timeout to be configured for the call is number vouchers x | 1 |
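The timeout proposed in this record is linear in the number of vouchers: timeout = number_vouchers x 2000 ms. A minimal sketch (the function name `push_timeout_ms` is hypothetical, not the app's actual API):

```python
def push_timeout_ms(number_vouchers: int) -> int:
    # Timeout suggested in the issue: number_vouchers x 2000 ms.
    return number_vouchers * 2000

assert push_timeout_ms(1) == 2000     # single voucher -> 2 s
assert push_timeout_ms(5) == 10000    # five vouchers -> 10 s
```

So a push of 30 vouchers would be given a 60 s window before the call is considered timed out.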
97,655 | 20,370,946,942 | IssuesEvent | 2022-02-21 11:09:07 | GeoNode/geonode | https://api.github.com/repos/GeoNode/geonode | closed | Align importlayers command to the new upload interface | performance code quality 3.3.x master | `importlayers` must be upgraded both for 3.3.x and master in light of the changes to the upload interface, including the support for remote files (https://github.com/GeoNode/geonode/issues/8667).
- **3.3.x:** currently it still uses the old REST API
- **master**: the old upload method was renamed to `upload_legacy` to maintain the compatibility with importlayers. The `upload_legacy` method will be dropped
 | 1.0 | Align importlayers command to the new upload interface - `importlayers` must be upgraded both for 3.3.x and master in light of the changes to the upload interface, including the support for remote files (https://github.com/GeoNode/geonode/issues/8667).
- **3.3.x:** currently it still uses the old REST API
- **master**: the old upload method was renamed to `upload_legacy` to maintain the compatibility with importlayers. The `upload_legacy` method will be dropped
 | non_priority | align importlayers command to the new upload interface importlayers must be upgraded both for x and master in light of the changes to the upload interface including the support for remote files x currently it still uses the old rest api master the old upload method was renamed to upload legacy to maintain the compatibility with importlayers the upload legacy method will be dropped | 0 |
100,577 | 11,199,491,133 | IssuesEvent | 2020-01-03 18:53:31 | sparkdesignsystem/spark-design-system | https://api.github.com/repos/sparkdesignsystem/spark-design-system | closed | Add Dictionary Docs - NDS | Component: Dictionary status: PO approved type: documentation | **User Story:**
As Spark, we created new Dictionary Docs in word that we want to copy into Storybook so that our New Doc Site displays the updated documentation.
**Notes:**
- Docs can be found at shorty/sparkcontent
**AC:**
- I will see updated documentation displaying on the New Doc Site | 1.0 | Add Dictionary Docs - NDS - **User Story:**
As Spark, we created new Dictionary Docs in word that we want to copy into Storybook so that our New Doc Site displays the updated documentation.
**Notes:**
- Docs can be found at shorty/sparkcontent
**AC:**
- I will see updated documentation displaying on the New Doc Site | non_priority | add dictionary docs nds user story as spark we created new dictionary docs in word that we want to copy into storybook so that our new doc site displays the updated documentation notes docs can be found at shorty sparkcontent ac i will see updated documentation displaying on the new doc site | 0 |
189,099 | 6,794,112,235 | IssuesEvent | 2017-11-01 10:41:14 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | Elasticsearch starter forces use of Log4j2, breaking logging in apps that try to use Logback | priority: normal type: bug | I'm trying to run `spring-boot-starter-data-elasticsearch` in latest Milestone 2.0.0.M5.
I've used project template generated from start.spring.io.
Here is the GitHub repo url: https://github.com/staleks/spring-boot-2.0.M5-ES
Run
1. `$ ./gradlew clean build`
2. `$ ./gradlew bootRun`
stale the process of loading application context.
Please check the attached image: https://github.com/staleks/spring-boot-2.0.M5-ES/blob/master/images/staled-bootRun-process.png
On the other hand, if I switch back to 1.5.8.RELEASE version
(GitHub repo url: https://github.com/staleks/spring-boot-1.5.8-ES)
Run
1. `$ ./gradlew clean build`
2. `$ ./gradlew bootRun`
application context is loaded, and web application is started.
Please check the attached image: https://github.com/staleks/spring-boot-1.5.8-ES/blob/master/images/bootRun-process.png
Can someone please verify this?
Thank You
| 1.0 | Elasticsearch starter forces use of Log4j2, breaking logging in apps that try to use Logback - I'm trying to run `spring-boot-starter-data-elasticsearch` in latest Milestone 2.0.0.M5.
I've used project template generated from start.spring.io.
Here is the GitHub repo url: https://github.com/staleks/spring-boot-2.0.M5-ES
Run
1. `$ ./gradlew clean build`
2. `$ ./gradlew bootRun`
stalls while loading the application context.
Please check the attached image: https://github.com/staleks/spring-boot-2.0.M5-ES/blob/master/images/staled-bootRun-process.png
On the other hand, if I switch back to 1.5.8.RELEASE version
(GitHub repo url: https://github.com/staleks/spring-boot-1.5.8-ES)
Run
1. `$ ./gradlew clean build`
2. `$ ./gradlew bootRun`
application context is loaded, and web application is started.
Please check the attached image: https://github.com/staleks/spring-boot-1.5.8-ES/blob/master/images/bootRun-process.png
Can someone please verify this?
Thank You
 | priority | elasticsearch starter forces use of breaking logging in apps that try to use logback i m trying to run spring boot starter data elasticsearch in latest milestone i ve used project template generated from start spring io here is the github repo url run gradlew clean build gradlew bootrun stalls while loading the application context please check the attached image on the other hand if i switch back to release version github repo url run gradlew clean build gradlew bootrun application context is loaded and web application is started please check the attached image can someone please verify this thank you | 1 |
613,434 | 19,090,232,105 | IssuesEvent | 2021-11-29 11:14:35 | golemfactory/ya-provider-winui | https://api.github.com/repos/golemfactory/ya-provider-winui | closed | Send all rolling log files with user feedback | priority: lowest no-issue-activity | Right now I believe that only one log with default name will be sent | 1.0 | Send all rolling log files with user feedback - Right now I believe that only one log with default name will be sent | priority | send all rolling log files with user feedback right now i believe that only one log with default name will be sent | 1 |
25,311 | 4,288,706,068 | IssuesEvent | 2016-07-17 16:51:25 | kraigs-android/kraigsandroid | https://api.github.com/repos/kraigs-android/kraigsandroid | closed | Rotating phone cuts off music preview | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Make sure auto-rotate isn't disabled
2. Select a song in Alarm Klock so the preview starts playing
3. Rotate phone
What is the expected output? What do you see instead?
Expect phone to rotate display, and music to continue to play. Instead, music
is cut off and music selection returns to the beginning.
What version of the product are you using? On what operating system?
Version 1.6 on Froyo.
Please provide any additional information below.
```
Original issue reported on code.google.com by `rep...@gmail.com` on 6 Oct 2010 at 4:37 | 1.0 | Rotating phone cuts off music preview - ```
What steps will reproduce the problem?
1. Make sure auto-rotate isn't disabled
2. Select a song in Alarm Klock so the preview starts playing
3. Rotate phone
What is the expected output? What do you see instead?
Expect phone to rotate display, and music to continue to play. Instead, music
is cut off and music selection returns to the beginning.
What version of the product are you using? On what operating system?
Version 1.6 on Froyo.
Please provide any additional information below.
```
Original issue reported on code.google.com by `rep...@gmail.com` on 6 Oct 2010 at 4:37 | non_priority | rotating phone cuts off music preview what steps will reproduce the problem make sure auto rotate isn t disabled select a song in alarm klock so the preview starts playing rotate phone what is the expected output what do you see instead expect phone to rotate display and music to continue to play instead music is cut off and music selection returns to the beginning what version of the product are you using on what operating system version on froyo please provide any additional information below original issue reported on code google com by rep gmail com on oct at | 0 |
236,429 | 7,749,198,104 | IssuesEvent | 2018-05-30 10:38:54 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0003518: Appointments entered in the calendar are not updated when an invited group subsequently changes | Calendar bug high priority | **Reported by svenkaths on 16 Dec 2010 12:36**
**Version:** Mialena (2010-03-9)
If you enter a new appointment in a shared calendar and add a group as an attendee, the appointment appears in the private calendar of every member of that group and is transferred on a sync via ActiveSync. If I then change the group afterwards, e.g. by creating a new user and adding them, the appointment is not updated: the new user is not an attendee of the appointment and therefore cannot sync it.
**Steps to reproduce:** 1. Create an appointment in a shared calendar.
2. Invite the group Users -> the appointment appears in the private calendar of every member of the group Users and can therefore also be synced via ActiveSync.
3. Create a new user who is also a member of the group Users -> the appointment created above does not appear in the private calendar :-(, hence no sync either.
| 1.0 | 0003518:
Im Kalender eingetragene Termine werden nicht angepasst wenn sich eine eingeladene Gruppe nachträglich ändert - **Reported by svenkaths on 16 Dec 2010 12:36**
**Version:** Mialena (2010-03-9)
Trägt man einen neuen Termin in einen gemeinsamen Kalender und fügt eine Gruppe als Attendee hinzu, so erscheint dieser Termin bei allen Mitgliedern der Gruppe im privaten Kalender und wird bei einem Sync mit ActiveSync mit übertragen. Ändere ich nun im nachhinein die Gruppe, z.B. indem ich einen neuen benutzer anlege und hinzufüge, so wird der Termin nicht angepasst und der neue Benutzer ist nicht Attendee des Termins und kann den Termin somit auch nicht syncen.
**Steps to reproduce:** 1. Termin in gemeinsamen Kalender anlegen.
2. Die Gruppe Users einladen -> bei allen Mitgliedern der Gruppe Users erscheint der Termin im privaten Kalender und kann somit auch per ActiveSync gesynct werden.
3. Neuen Benutzer anglegen, der ebenfalls Mitglied der Gruppe Users ist -> Der eben angelegte Termin erscheint nicht im privaten Kalender :-(, daher auch kein sync.
| priority | im kalender eingetragene termine werden nicht angepasst wenn sich eine eingeladene gruppe nachträglich ändert reported by svenkaths on dec version mialena trägt man einen neuen termin in einen gemeinsamen kalender und fügt eine gruppe als attendee hinzu so erscheint dieser termin bei allen mitgliedern der gruppe im privaten kalender und wird bei einem sync mit activesync mit übertragen ändere ich nun im nachhinein die gruppe z b indem ich einen neuen benutzer anlege und hinzufüge so wird der termin nicht angepasst und der neue benutzer ist nicht attendee des termins und kann den termin somit auch nicht syncen steps to reproduce termin in gemeinsamen kalender anlegen die gruppe users einladen gt bei allen mitgliedern der gruppe users erscheint der termin im privaten kalender und kann somit auch per activesync gesynct werden neuen benutzer anglegen der ebenfalls mitglied der gruppe users ist gt der eben angelegte termin erscheint nicht im privaten kalender daher auch kein sync | 1 |
6,370 | 6,361,319,796 | IssuesEvent | 2017-07-31 12:37:23 | warg-lang/warg | https://api.github.com/repos/warg-lang/warg | opened | Generate accessible static analysis diagnostics | ci enhancement infrastructure | [neovim](https://github.com/neovim/neovim) provides a nice [diagnostics overview](https://neovim.io/doc/reports/clang/) using Clang Static Analysis. While it is not completely transparent how to do that, making similar page would be great. | 1.0 | Generate accessible static analysis diagnostics - [neovim](https://github.com/neovim/neovim) provides a nice [diagnostics overview](https://neovim.io/doc/reports/clang/) using Clang Static Analysis. While it is not completely transparent how to do that, making similar page would be great. | non_priority | generate accessible static analysis diagnostics provides a nice using clang static analysis while it is not completely transparent how to do that making similar page would be great | 0 |
500,638 | 14,503,419,690 | IssuesEvent | 2020-12-11 22:40:36 | phetsims/faradays-law | https://api.github.com/repos/phetsims/faradays-law | closed | Dragging magnet to pan around while zoomed in causes jittery movement | priority:5-deferred type:bug | **Test device**
iPad 6th Gen
**Operating System**
iPadOS 14.1
**Browser**
Safari
**Problem description**
For https://github.com/phetsims/QA/issues/568. Fairly minor. Feel free to close if not an issue. Mostly seen on iPad/touch screen, but reproduced a bit on laptop.
When the sim is zoomed in, you can drag the magnet to the edge of the screen to pan around. As you do, the magnet jitters around quite a bit.
**Visuals**

| 1.0 | Dragging magnet to pan around while zoomed in causes jittery movement - **Test device**
iPad 6th Gen
**Operating System**
iPadOS 14.1
**Browser**
Safari
**Problem description**
For https://github.com/phetsims/QA/issues/568. Fairly minor. Feel free to close if not an issue. Mostly seen on iPad/touch screen, but reproduced a bit on laptop.
When the sim is zoomed in, you can drag the magnet to the edge of the screen to pan around. As you do, the magnet jitters around quite a bit.
**Visuals**

| priority | dragging magnet to pan around while zoomed in causes jittery movement test device ipad gen operating system ipados browser safari problem description for fairly minor feel free to close if not an issue mostly seen on ipad touch screen but reproduced a bit on laptop when the sim is zoomed in you can drag the magnet to the edge of the screen to pan around as you do the magnet jitters around quite a bit visuals | 1 |
13,475 | 15,983,888,932 | IssuesEvent | 2021-04-18 11:02:20 | brucemiller/LaTeXML | https://api.github.com/repos/brucemiller/LaTeXML | closed | Theorem title in Jats-XML | bug postprocessing schema | When I convert a theorem to JATS-XML, the title does not appear in the XML.
For a theorem environment defined like
**test.tex**
```tex
\documentclass{article}
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\begin{theorem}
Let f be a function.
\end{theorem}
\end{document}
```
I get with `latexml test.tex`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json"?>
<?latexml class="article"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<theorem class="ltx_theorem_theorem" inlist="thm theorem:theorem" xml:id="S0.Thmtheorem1">
<tags>
<tag>Theorem 0.1</tag>
<tag role="refnum">0.1</tag>
<tag role="typerefnum">Theorem 0.1</tag>
</tags>
<title class="ltx_runin"><tag><text font="bold">Theorem 0.1</text></tag></title>
<para xml:id="S0.Thmtheorem1.p1">
<p><text font="italic">Let f be a function.</text></p>
</para>
</theorem>
</document>
```
When I convert to JATS-XML with `latexmlc test.tex --dest=test.jats.xml --pmml --stylesheet=LaTeXML-jats.xsl`:
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the front matter.
-->
</article-meta>
</front>
<body>
<statement id="S0.Thmtheorem1">
<title/>
<p id="S0.Thmtheorem1.p1">
<italic>Let f be a function.</italic>
</p>
</statement>
</body>
<back>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the back matter
-->
<app-group/>
</back>
</article>
```
It seems the conversion should happen here, but is not picking up the title:
```xml
<xsl:template match="ltx:theorem/ltx:title">
<title>
<xsl:apply-templates select="@*|node()"/>
</title>
</xsl:template>
``` | 1.0 | Theorem title in Jats-XML - When I convert a theorem to JATS-XML, the title does not appear in the XML.
For a theorem environment defined like
**test.tex**
```tex
\documentclass{article}
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\begin{theorem}
Let f be a function.
\end{theorem}
\end{document}
```
I get with `latexml test.tex`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json"?>
<?latexml class="article"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<theorem class="ltx_theorem_theorem" inlist="thm theorem:theorem" xml:id="S0.Thmtheorem1">
<tags>
<tag>Theorem 0.1</tag>
<tag role="refnum">0.1</tag>
<tag role="typerefnum">Theorem 0.1</tag>
</tags>
<title class="ltx_runin"><tag><text font="bold">Theorem 0.1</text></tag></title>
<para xml:id="S0.Thmtheorem1.p1">
<p><text font="italic">Let f be a function.</text></p>
</para>
</theorem>
</document>
```
When I convert to JATS-XML with `latexmlc test.tex --dest=test.jats.xml --pmml --stylesheet=LaTeXML-jats.xsl`:
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the front matter.
-->
</article-meta>
</front>
<body>
<statement id="S0.Thmtheorem1">
<title/>
<p id="S0.Thmtheorem1.p1">
<italic>Let f be a function.</italic>
</p>
</statement>
</body>
<back>
<!-- The element theorem with attributes
class=ltx_theorem_theoreminlist=thm theorem:theoremxml:id=S0.Thmtheorem1fragid=S0.Thmtheorem1
is currently not supported for the back matter
-->
<app-group/>
</back>
</article>
```
It seems the conversion should happen here, but is not picking up the title:
```xml
<xsl:template match="ltx:theorem/ltx:title">
<title>
<xsl:apply-templates select="@*|node()"/>
</title>
</xsl:template>
``` | non_priority | theorem title in jats xml when i convert a theorem to jats xml the title is not appearing in the xml for a theorem environment defined like test tex tex documentclass article newtheorem theorem theorem begin document begin theorem let f be a function end theorem end document i get with latexml test tex xml document xmlns theorem theorem theorem let f be a function when i convert to jats xml with latexmlc test tex dest test jats xml pmml stylesheet latexml jats xsl xml the element theorem with attributes class ltx theorem theoreminlist thm theorem theoremxml id is currently not supported for the front matter let f be a function the element theorem with attributes class ltx theorem theoreminlist thm theorem theoremxml id is currently not supported for the back matter it seems the conversion should happen here but is not picking up the title xml | 0 |
551,654 | 16,177,762,784 | IssuesEvent | 2021-05-03 09:44:19 | sopra-fs21-group-4/server | https://api.github.com/repos/sopra-fs21-group-4/server | closed | Joining a lobby via the Lobby URL | high priority removed task | - [x] Joining Request via Lobby URL makes User Entity join Lobby
- [ ] Test
#27 Story 4 | 1.0 | Joining a lobby via the Lobby URL - - [x] Joining Request via Lobby URL makes User Entity join Lobby
- [ ] Test
#27 Story 4 | priority | joining a lobby via the lobby url joining request via lobby url makes user entity join lobby test story | 1 |
18,681 | 4,295,667,561 | IssuesEvent | 2016-07-19 08:09:51 | smartive/giuseppe | https://api.github.com/repos/smartive/giuseppe | opened | Enhance documentation about tsconfig.json settings | documentation | Mentioned in #84:
We should document the minimum required settings for the `tsconfig.json` file.
In addition:
- Create yeoman generator for a full blown app (#81)
- #82 | 1.0 | Enhance documentation about tsconfig.json settings - Mentioned in #84:
We should document the minimum required settings for the `tsconfig.json` file.
In addition:
- Create yeoman generator for a full blown app (#81)
- #82 | non_priority | enhance documentation about tsconfig json settings mentioned in we should documentate the used minimum settings for the tsconfig json file in addition create yeoman generator for a full blown app | 0 |
162,887 | 20,255,481,067 | IssuesEvent | 2022-02-14 22:37:18 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution][Detections] In-memory Rule Management and Monitoring tables | Team:Detections and Resp Team: SecuritySolution Feature:Rule Management v8.1.0 Feature:Rule Monitoring Team:Detection Rules | ## Summary
Previously we considered an option of implementing in-memory filtering, sorting and searching in the browser for our Rule Management and Monitoring tables: https://github.com/elastic/kibana/pull/89877
This PR was abandoned because:
- Our `rules/_find` and `rules/_find_statuses` endpoints were extremely slow when the page size was 100+ items. We had an N+1 problem and were fetching rule status and actions SOs for each rule in separate requests to Elasticsearch. As a result, the in-browser app, in an attempt to load all 500+ rules, was generating 1500+ requests to ES under the hood. If we multiply that by the number of simultaneous users and add the fact that some of them have more than 500 rules, it becomes clear that it wasn't a scalable solution.
- Saved objects API didn't support aggregations back then. Now it does, and we don't fetch N status SOs of N rules in N queries anymore.
Since now SO API supports aggs, and we're in the process of getting rid of legacy SOs, we are going to reconsider the in-memory approach.
## To do
- [x] Implement a POC for the in-memory implementation of the Rule Management and Monitoring tables.
- [x] Add support for sorting by all the existing columns of the Rule Management and Monitoring tables.
- [x] Test performance of the POC on a normal (600) and large (few thousand) amount of rules.
- Measure the full page load time and subsequent table refresh time
- Measure event loop blocking on the server (could be caused by JSON (de)serialization of a large amount of rules) | True | [Security Solution][Detections] In-memory Rule Management and Monitoring tables - ## Summary
Previously we considered an option of implementing in-memory filtering, sorting and searching in the browser for our Rule Management and Monitoring tables: https://github.com/elastic/kibana/pull/89877
This PR was abandoned because:
- Our `rules/_find` and `rules/_find_statuses` endpoints were extremely slow when the page size was 100+ items. We had an N+1 problem and were fetching rule status and actions SOs for each rule in separate requests to Elasticsearch. As a result, the in-browser app, in an attempt to load all 500+ rules, was generating 1500+ requests to ES under the hood. If we multiply that by the number of simultaneous users and add the fact that some of them have more than 500 rules, it becomes clear that it wasn't a scalable solution.
- Saved objects API didn't support aggregations back then. Now it does, and we don't fetch N status SOs of N rules in N queries anymore.
Since now SO API supports aggs, and we're in the process of getting rid of legacy SOs, we are going to reconsider the in-memory approach.
## To do
- [x] Implement a POC for the in-memory implementation of the Rule Management and Monitoring tables.
- [x] Add support for sorting by all the existing columns of the Rule Management and Monitoring tables.
- [x] Test performance of the POC on a normal (600) and large (few thousand) amount of rules.
- Measure the full page load time and subsequent table refresh time
- Measure event loop blocking on the server (could be caused by JSON (de)serialization of a large amount of rules) | non_priority | in memory rule management and monitoring tables summary previously we considered an option of implementing in memory filtering sorting and searching in the browser for our rule management and monitoring tables this pr was abandoned because our rules find and rules find statuses endpoints were extremely slow when the page size was items we had n problem and were fetching rule status and actions sos per each rule in separate requests to elasticsearch this was leading to the fact that in browser app in attempt to load all the rules was generating requests to es under the hood if we multiply that by the number of simultaneous users and add the fact that some of them have more than rules it becomes clear that it wasn t a scalable solution saved objects api didn t support aggregations back then now it does and we don t fetch n status sos of n rules in n queries anymore since now so api supports aggs and we re in the process of getting rid of legacy sos we are going to reconsider the in memory approach to do implement a poc for the in memory implementation of the rule management and monitoring tables add support for sorting by all the existing columns of the rule management and monitoring tables test performance of the poc on a normal and large few thousand amount of rules measure the full page load time and subsequent table refresh time measure event loop blocking on the server could be caused by json de serialization of a large amount of rules | 0 |
754,484 | 26,390,473,104 | IssuesEvent | 2023-01-12 15:20:27 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | opened | [NPC] Bleeding Hollow Tormentor | Confirmed Priority-Low 61-64 | Original Issue: https://github.com/chromiecraft/chromiecraft/issues/4708
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
61-64
### Current Behaviour
Mounted Bleeding Hollow Tormentors waddle while walking
They don't cast Mend Pet when their pet gets low
Unmounted ones still summon their "riding" wolf
### Expected Blizzlike Behaviour
The animation speed of the mounted ones fits their speed
They cast Mend Pet when their pet gets low enough
Only mounted ones summon a riding wolf on aggro (probably)
### Source
Animation looks normal here
https://youtu.be/Q6BFcTcxeOM?t=321
Can see one starting to cast Mend Pet for a split second here

https://youtu.be/Q6BFcTcxeOM?t=129
### Steps to reproduce the problem
1. `.go c 69471`
2. Watch some mounted Tormentors waddle
3. Engage a tormentor and bring the pet low
4. Engage a non-mounted tormentor
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=19424
### AC rev. hash/commit
592a26cb8c46 2023-01-11
### Operating system
Ubuntu 20.04 - Windows 10 x64
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-custom-worldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| 1.0 | [NPC] Bleeding Hollow Tormentor - Original Issue: https://github.com/chromiecraft/chromiecraft/issues/4708
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
61-64
### Current Behaviour
Mounted Bleeding Hollow Tormentors waddle while walking
They don't cast Mend Pet when their pet gets low
Unmounted ones still summon their "riding" wolf
### Expected Blizzlike Behaviour
The animation speed of the mounted ones fits their speed
They cast Mend Pet when their pet gets low enough
Only mounted ones summon a riding wolf on aggro (probably)
### Source
Animation looks normal here
https://youtu.be/Q6BFcTcxeOM?t=321
Can see one starting to cast Mend Pet for a split second here

https://youtu.be/Q6BFcTcxeOM?t=129
### Steps to reproduce the problem
1. `.go c 69471`
2. Watch some mounted Tormentors waddle
3. Engage a tormentor and bring the pet low
4. Engage a non-mounted tormentor
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=19424
### AC rev. hash/commit
592a26cb8c46 2023-01-11
### Operating system
Ubuntu 20.04 - Windows 10 x64
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-custom-worldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| priority | bleeding hollow tormentor original issue what client do you play on enus faction both content phase current behaviour mounted bleeding hollow tormentors waddle while walking they don t cast mend pet when their pet gets low unmounted ones still summon their riding wolf expected blizzlike behaviour the animation speed of the mounted ones fits their speed they cast mend pet when their pet gets low enough only mounted ones summon a riding wolf on aggro probably source animation looks normal here can see one starting to cast mend pet for a split second here steps to reproduce the problem go c watch some mounted tormentors waddle engage a tormentor and bring the pet low engage a non mounted tormentor extra notes ac rev hash commit operating system ubuntu windows modules customizations none server chromiecraft | 1 |
462,384 | 13,245,982,374 | IssuesEvent | 2020-08-19 15:05:23 | airshipit/airshipctl | https://api.github.com/repos/airshipit/airshipctl | closed | Add Label phase injections in airshipctl. | enhancement priority/low | **Problem description (if applicable)**
Given the new approach of delivering document bundles that have been grouped or segregated as kustomization sets by the intended phase of airship in which they are delivered, this feature calls for the injection of an appropriate label to mark the artifacts with the appropriate phase label.
The label to be injected would be :
**_airshipit.org/phase: "..."_**
Where the phases are :
- bootstrap
- initinfra
- ...
**Proposed change**
During the appropriate airshipctl commands such as cluster initinfra, or deliver we will inject the appropriate phase label.
**Potential impacts**
N/A
| 1.0 | Add Label phase injections in airshipctl. - **Problem description (if applicable)**
Given the new approach of delivering document bundles that have been grouped or segregated as kustomization sets by the intended phase of airship in which they are delivered, this feature calls for the injection of an appropriate label to mark the artifacts with the appropriate phase label.
The label to be injected would be :
**_airshipit.org/phase: "..."_**
Where the phases are :
- bootstrap
- initinfra
- ...
**Proposed change**
During the appropriate airshipctl commands such as cluster initinfra, or deliver we will inject the appropriate phase label.
**Potential impacts**
N/A
| priority | add label phase injections in airshipctl problem description if applicable given the new approach of delivering document bundles that have been grouped or segregated as kustomization sets by the intended phase of airship in which they are delivered these feature calls for the injection of an appropriate label to mark the artifacts with the appropriate phase label the label to be injected would be airshipit org phase where the phases are bootstrap initinfra proposed change during the appropriate airshipctl commands such as cluster initinfra or deliver we will inject the appropriate phase label potential impacts n a | 1 |
523,391 | 15,180,770,662 | IssuesEvent | 2021-02-15 01:11:43 | QuantEcon/lecture-python.myst | https://api.github.com/repos/QuantEcon/lecture-python.myst | closed | [lecture_comparison]math_size_in_headings | medium-priority | This is a minor issue that math expressions in headings are too small in ```MyST```. I wonder whether there is any solution for this.
The example is: (Left: ```RST```, Right: ```MyST```.)

cc: @mmcky
[lecture_comparison]math_size_in_headings - This is a minor issue that math expressions in headings are too small in ```MyST```. I wonder whether there is any solution for this.
The example is: (Left: ```RST```, Right: ```MyST```.)

cc: @mmcky
| priority | math size in headings this is a minor issue that math expressions in headings is too small in myst i wonder whether there is any solution for this the example is left rst right myst cc mmcky | 1 |
67,034 | 8,070,547,671 | IssuesEvent | 2018-08-06 10:05:56 | JohnSegerstedt/Game1 | https://api.github.com/repos/JohnSegerstedt/Game1 | closed | Redesign StartMenu animation to match that of the game | redesign shelved | **ACCEPTANCE CRITERIA:**
* The player shapes share the same Material as the real player models.
* The ground shares the same Material as the real ground model.
* The edges of the ground has been correctly added. | 1.0 | Redesign StartMenu animation to match that of the game - **ACCEPTANCE CRITERIA:**
* The player shapes share the same Material as the real player models.
* The ground shares the same Material as the real ground model.
* The edges of the ground has been correctly added. | non_priority | redesign startmenu animation to match that of the game acceptance criteria the player shapes share the same material as the real player models the ground shares the same material as the real ground model the edges of the ground has been correctly added | 0 |
490,269 | 14,117,537,765 | IssuesEvent | 2020-11-08 09:37:03 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Extend speedtest with additional metrics | enhancement priority ticket | Speedtest should be extended to support alerts and charts on the upload and latency metrics. | 1.0 | Extend speedtest with additional metrics - Speedtest should be extended to support alerts and charts on the upload and latency metrics. | priority | extend speedtest with additional metrics speedtest should be extended to support alerts and charts on the upload and latency metrics | 1 |
236,965 | 26,073,377,812 | IssuesEvent | 2022-12-24 05:22:22 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | reopened | CVE-2018-1130 (Medium) detected in linuxv5.2 | security vulnerability | ## CVE-2018-1130 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel before version 4.16-rc7 is vulnerable to a null pointer dereference in dccp_write_xmit() function in net/dccp/output.c in that allows a local user to cause a denial of service by a number of certain crafted system calls.
<p>Publish Date: 2018-05-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1130>CVE-2018-1130</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1130">https://nvd.nist.gov/vuln/detail/CVE-2018-1130</a></p>
<p>Release Date: 2018-05-10</p>
<p>Fix Resolution: 4.16-rc7</p>
</p>
</details>
<p></p>
| True | CVE-2018-1130 (Medium) detected in linuxv5.2 - ## CVE-2018-1130 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/output.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel before version 4.16-rc7 is vulnerable to a null pointer dereference in dccp_write_xmit() function in net/dccp/output.c in that allows a local user to cause a denial of service by a number of certain crafted system calls.
<p>Publish Date: 2018-05-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1130>CVE-2018-1130</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1130">https://nvd.nist.gov/vuln/detail/CVE-2018-1130</a></p>
<p>Release Date: 2018-05-10</p>
<p>Fix Resolution: 4.16-rc7</p>
</p>
</details>
<p></p>
| non_priority | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files net dccp output c net dccp output c net dccp output c vulnerability details linux kernel before version is vulnerable to a null pointer dereference in dccp write xmit function in net dccp output c in that allows a local user to cause a denial of service by a number of certain crafted system calls publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
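The 5.5 base score reported above can be reproduced from the listed metrics. A minimal sketch using the published CVSS v3.0 weights for this vector (AV:Local = 0.55, AC:Low = 0.77, PR:Low = 0.62 with unchanged scope, UI:None = 0.85, and impact weights C:None = 0, I:None = 0, A:High = 0.56); the function below only handles the Scope: Unchanged case shown in the report:

```python
import math

def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for Scope: Unchanged (weights from the FIRST spec)."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)      # Impact Sub-Score
    impact = 6.42 * iss                        # unchanged-scope impact
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # The spec's "round up" = ceiling to one decimal place
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Metrics from the report: AV:Local, AC:Low, PR:Low, UI:None, C:None / I:None / A:High
score = cvss3_base(av=0.55, ac=0.77, pr=0.62, ui=0.85, c=0.0, i=0.0, a=0.56)
print(score)  # → 5.5
```

Since confidentiality and integrity impacts are None, the entire 5.5 comes from the availability impact plus local exploitability — consistent with a local denial-of-service bug.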
306,633 | 26,485,398,229 | IssuesEvent | 2023-01-17 17:35:16 | AscEmu/AscEmu | https://api.github.com/repos/AscEmu/AscEmu | opened | 👾 [Bug Report] build "NOT USE_PCH" not work (develop) | Issue - Needs retesting | **Description**:
build "NOT USE_PCH" not work
**Steps to reproduce the problem**:
1. build "NOT USE_PCH"
**AscEmu hash/commit**:
latest dev | 1.0 | 👾 [Bug Report] build "NOT USE_PCH" not work (develop) - **Description**:
build "NOT USE_PCH" not work
**Steps to reproduce the problem**:
1. build "NOT USE_PCH"
**AscEmu hash/commit**:
latest dev | non_priority | 👾 build not use pch not work develop description build not use pch not work steps to reproduce the problem build not use pch ascemu hash commit latest dev | 0 |
529,199 | 15,383,158,184 | IssuesEvent | 2021-03-03 02:12:23 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | ConvexPolygon assignable behaviour | Framework Low Priority Stale | As a corollary to fix #14200 I notice that the use-case for _FractionalRebinning_ [here](https://github.com/mantidproject/mantid/blob/master/Framework/DataObjects/src/FractionalRebinning.cpp#L147) has a single ConvexPolygon object, which is global to the loop, and is _cleared_ via `clear` and then inserted into via `insert` upon each iteration. It raises a few questions:
1. Surely you have n assignments more than necessary because clear is setting all members to double max, double min etc?
2. This section of the FractionalRebinning code does not look like it can be made thread-safe in its current form. Is that why we have a parallel critical block, but no openmp loop?
3. Given that the solution to both of the above would be to make ConvexPolygon immutable, is there any use-case I've missed?
@martyngigg since you seem to be the original author of this, perhaps you could comment on the questions and then reassign to @tom-perkins for the fix.
| 1.0 | ConvexPolygon assignable behaviour - As a corollary to fix #14200 I notice that the use-case for _FractionalRebinning_ [here](https://github.com/mantidproject/mantid/blob/master/Framework/DataObjects/src/FractionalRebinning.cpp#L147) has a single ConvexPolygon object, which is global to the loop, and is _cleared_ via `clear` and then inserted into via `insert` upon each iteration. It raises a few questions:
1. Surely you have n assignments more than necessary because clear is setting all members to double max, double min etc?
2. This section of the FractionalRebinning code does not look like it can be made thread-safe in its current form. Is that why we have a parallel critical block, but no openmp loop?
3. Given that the solution to both of the above would be to make ConvexPolygon immutable, is there any use-case I've missed?
@martyngigg since you seem to be the original author of this, perhaps you could comment on the questions and then reassign to @tom-perkins for the fix.
| priority | convexpolygon assignable behaviour as a corollary to fix i notice that the use case for fractionalrebinning has a single convexpolygon object which is global to the loop and is cleared via clear and then inserted into via insert upon each iteration it raises a few questions surely you have n assignments more than necessary because clear is setting all members to double max double min etc this section of the fractionalrebinning code does not look like it can be made thread safe in it s current form is that why we have a parallel critical block but no openmp loop given that the solution to both of the above would be to make convexpolygon immutable is there any use case i ve missed martyngigg since you seem to be the original author of this perhaps you could comment on the questions and then reassign to tom perkins for the fix | 1 |
395,201 | 11,672,501,991 | IssuesEvent | 2020-03-04 06:48:57 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Message on ROI is missing when inputting pre-filled stake | Add post v2 launch Priority: Low | Should appear once the user enters an amount:
Design: https://www.figma.com/file/aAzKHh4cA6OT2t7WFv2BQ7fB/Reporting-and-Disputing?node-id=231%3A60628
 | 1.0 | Message on ROI is missing when inputting pre-filled stake - Should appear once the user enters an amount:
Design: https://www.figma.com/file/aAzKHh4cA6OT2t7WFv2BQ7fB/Reporting-and-Disputing?node-id=231%3A60628
 | priority | message on roi is missing when inputting pre filled stake should appear once the user enters an amount design | 1 |
736,359 | 25,470,632,694 | IssuesEvent | 2022-11-25 09:52:06 | SimformSolutionsPvtLtd/flutter_calendar_view | https://api.github.com/repos/SimformSolutionsPvtLtd/flutter_calendar_view | closed | Week view incorrectly displays events when a day contains overlapping events | bug priority:1 | When two events in the week view are overlapping they become half as wide to fit over each other. This is perfectly fine. The problem is that when this happens at any time in that day **all** other events that day adapt the same half-size measure even if they are not overlapping with anything.
I would expect only the overlapping events to become small and all other events to remain the same.
My week view looks as follows:
```dart
WeekView(
  key: _weekViewStateKey,
  heightPerMinute: heightPerMinute,
  onEventTap: _onEventTap(),
  weekPageHeaderBuilder: (weekStartDate, weekEndDate) {
    return DayPageHeader(
      date: weekStartDate,
      dateStringBuilder: (date, {DateTime? secondaryDate}) {
        return DateFormat.yMMMd(Platform.localeName).format(weekStartDate) +
            ' - ' +
            DateFormat.yMMMd(Platform.localeName).format(weekEndDate);
      },
      onNextDay: () {
        _weekViewStateKey.currentState!.nextPage();
      },
      onPreviousDay: () {
        _weekViewStateKey.currentState!.previousPage();
      },
      backgroundColor: CupertinoDynamicColor.resolve(
          CupertinoColors.secondarySystemBackground, context),
    );
  },
);
```
Visual of the week view with the issue happening:
<img width="345" alt="Screenshot 2022-11-04 at 09 53 12" src="https://user-images.githubusercontent.com/78813883/199932306-8d82b168-a656-4963-b8f3-967302ccb85a.png">
In the image, if we take a look at Thursday for example, the event from ~12:30-2:30 overlaps with the one from ~2:30-4:30. It may look a little weird but the end and start of them are at the exact same moment so that makes sense, but I see no reason for the event from ~9:30-10:30 to also be half size.
Not sure if this is a bug or something I can change myself. Help would be appreciated! | 1.0 | Week view incorrectly displays events when a day contains overlapping events - When two events in the week view are overlapping they become half as wide to fit over each other. This is perfectly fine. The problem is that when this happens at any time in that day **all** other events that day adapt the same half-size measure even if they are not overlapping with anything.
I would expect only the overlapping events to become small and all other events to remain the same.
My week view looks as follows:
```dart
WeekView(
  key: _weekViewStateKey,
  heightPerMinute: heightPerMinute,
  onEventTap: _onEventTap(),
  weekPageHeaderBuilder: (weekStartDate, weekEndDate) {
    return DayPageHeader(
      date: weekStartDate,
      dateStringBuilder: (date, {DateTime? secondaryDate}) {
        return DateFormat.yMMMd(Platform.localeName).format(weekStartDate) +
            ' - ' +
            DateFormat.yMMMd(Platform.localeName).format(weekEndDate);
      },
      onNextDay: () {
        _weekViewStateKey.currentState!.nextPage();
      },
      onPreviousDay: () {
        _weekViewStateKey.currentState!.previousPage();
      },
      backgroundColor: CupertinoDynamicColor.resolve(
          CupertinoColors.secondarySystemBackground, context),
    );
  },
);
```
Visual of the week view with the issue happening:
<img width="345" alt="Screenshot 2022-11-04 at 09 53 12" src="https://user-images.githubusercontent.com/78813883/199932306-8d82b168-a656-4963-b8f3-967302ccb85a.png">
In the image, if we take a look at Thursday for example, the event from ~12:30-2:30 overlaps with the one from ~2:30-4:30. It may look a little weird but the end and start of them are at the exact same moment so that makes sense, but I see no reason for the event from ~9:30-10:30 to also be half size.
Not sure if this is a bug or something I can change myself. Help would be appreciated! | priority | week view incorrectly displays events when a day contains overlapping events when two events in the week view are overlapping they become half as wide to fit over each other this is perfectly fine the problem is that when this happens at any time in that day all other events that day adapt the same half size measure even if they are not overlapping with anything i would expect only the overlapping events to become small and all other events to remain the same my week view looks as follows dart weekview key weekviewstatekey heightperminute heightperminute oneventtap oneventtap weekpageheaderbuilder weekstartdate weekenddate return daypageheader date weekstartdate datestringbuilder date datetime secondarydate return dateformat ymmmd platform localename format weekstartdate dateformat ymmmd platform localename format weekenddate onnextday weekviewstatekey currentstate nextpage onpreviousday weekviewstatekey currentstate previouspage backgroundcolor cupertinodynamiccolor resolve cupertinocolors secondarysystembackground context visual of the week view with the issue happening img width alt screenshot at src in the image if we take a look at thursday for example the events from overlaps with it may look a little weird but the end and start of them are at the exact same moment so that makes sense but i see no reason for the event from to also be half size not sure if this is a bug or something i can change myself help would be appreciated | 1 |
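The expected layout described in this report — only events that actually overlap sharing the column width, everything else staying full width — amounts to grouping a day's events into transitive overlap clusters before assigning widths. A language-neutral sketch of that grouping (Python, with events as `(start, end)` minute pairs; the function and names are illustrative, not part of the package's API, and `start <= current_end` assumes the library's apparent behaviour of treating back-to-back events like the 12:30-2:30 / 2:30-4:30 pair as overlapping):

```python
def overlap_clusters(events):
    """Group (start, end) intervals into clusters of transitively overlapping events.

    Each cluster would share the day-column width (width = 1 / len(cluster));
    a cluster of size 1 keeps the full width, matching the expected behaviour.
    """
    clusters = []
    current, current_end = [], None
    for start, end in sorted(events):
        if current and start <= current_end:   # touches/overlaps the running cluster
            current.append((start, end))
            current_end = max(current_end, end)
        else:                                  # gap -> close cluster, start a new one
            if current:
                clusters.append(current)
            current, current_end = [(start, end)], end
    if current:
        clusters.append(current)
    return clusters

# Thursday from the screenshot: ~9:30-10:30, ~12:30-2:30, ~2:30-4:30 (in minutes)
day = [(570, 630), (750, 870), (870, 990)]
print(overlap_clusters(day))  # → [[(570, 630)], [(750, 870), (870, 990)]]
```

Under this grouping the 9:30-10:30 event sits in its own cluster and would keep full width, while only the two adjoining afternoon events split the column.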
132,960 | 12,527,916,778 | IssuesEvent | 2020-06-04 08:43:48 | coin-or/SHOT | https://api.github.com/repos/coin-or/SHOT | opened | API documentation | API documentation | Document the API, e.g. how to create and solve a model from an external program:
- [ ] create an example in the model tests.
- [ ] document this on the project web site.
- [ ] example on how to use the termination callback. | 1.0 | API documentation - Document the API, e.g. how to create and solve a model from an external program:
- [ ] create an example in the model tests.
- [ ] document this on the project web site.
- [ ] example on how to use the termination callback. | non_priority | api documentation document the api e g how to create and solve a model from an external program create an example in the model tests document this on the project web site example on how to use the termination callback | 0 |
145,437 | 22,688,137,795 | IssuesEvent | 2022-07-04 16:06:34 | audacity/audacity | https://api.github.com/repos/audacity/audacity | closed | Incorrect cursor/pointer for moving/dragging Loop Regions | bug P3 Design / UI / UX Known Issue Looping | **Describe the bug**
The cursor used for the grabber for moving Loop Regions in the timeline is incorrect (according to both Microsoft and Apple design guidelines - and in line with common usage elsewhere in most other apps)
The hand icon is **_not_** implemented correctly for the loop region drag:
a) when hovering over the loop region you get the open-hand icon. This **_is_** correct as it indicates the **_potential_** for dragging.
b) but when you then click the open hand-should change to a closed-hand to indicate the drag.
Compare this to the cursor icon behavior over the clip-handle drag-bars which does this correctly.
See:
a) https://docs.microsoft.com/en-us/windows/win32/uxguide/inter-mouse
b) https://developer.apple.com/design/human-interface-guidelines/macos/user-interaction/mouse-and-trackpad/
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Loop Region on the Timeline
2. Hover your cursor over the Loop Region
3. Observe: the open hand icon _(which indicates the potential for dragging)_
4. Click there
5. Observe: the icon does **not** change to the closed gloved hand for the drag
**Expected behavior**
When clicking in a Loop Region in the Timeline _(while audio is not loop playing)_ the cursor icon should change to a closed-hand to indicated draggabilty/drag.
_Note that when Loop play is active then clicking in the Loop Region restarts the Loop play from that point in the Loop Region._
**Screenshots**
Can't do this as screenshots don't capture the pointers
**Additional information (please complete the following information):**
- OS: Windows 10 and macOS 12.1 Monterey- but assume all OS
- Version: Audacity - all including all 3.1.x and 3.2.0 alpha
**Additional context**
@Tantacrul - flagging this as a UX issue - _consistency with cursor icon behaviors on the Clip-handle drag-bars_
| 1.0 | Incorrect cursor/pointer for moving/dragging Loop Regions - **Describe the bug**
The cursor used for the grabber for moving Loop Regions in the timeline is incorrect (according to both Microsoft and Apple design guidelines - and in line with common usage elsewhere in most other apps)
The hand icon is **_not_** implemented correctly for the loop region drag:
a) when hovering over the loop region you get the open-hand icon. This **_is_** correct as it indicates the **_potential_** for dragging.
b) but when you then click the open hand-should change to a closed-hand to indicate the drag.
Compare this to the cursor icon behavior over the clip-handle drag-bars which does this correctly.
See:
a) https://docs.microsoft.com/en-us/windows/win32/uxguide/inter-mouse
b) https://developer.apple.com/design/human-interface-guidelines/macos/user-interaction/mouse-and-trackpad/
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Loop Region on the Timeline
2. Hover your cursor over the Loop Region
3. Observe: the open hand icon _(which indicates the potential for dragging)_
4. Click there
5. Observe: the icon does **not** change to the closed gloved hand for the drag
**Expected behavior**
When clicking in a Loop Region in the Timeline _(while audio is not loop playing)_ the cursor icon should change to a closed-hand to indicated draggabilty/drag.
_Note that when Loop play is active then clicking in the Loop Region restarts the Loop play from that point in the Loop Region._
**Screenshots**
Can't do this as screenshots don't capture the pointers
**Additional information (please complete the following information):**
- OS: Windows 10 and macOS 12.1 Monterey- but assume all OS
- Version: Audacity - all including all 3.1.x and 3.2.0 alpha
**Additional context**
@Tantacrul - flagging this as a UX issue - _consistency with cursor icon behaviors on the Clip-handle drag-bars_
| non_priority | incorrect cursor pointer for moving dragging loop regions describe the bug the cursor used for the grabber for moving loop regions in the timeline is incorrect according to both microsoft and apple design guidelines and in line with common usage elsewhere in most other apps the hand icon is not implemented correctly for the loop region drag a when hovering over the loop region you get the open hand icon this is correct as it indicates the potential for dragging b but when you then click the open hand should change to a closed hand to indicate the drag compare this to the cursor icon behavior over the clip handle drag bars which does this correctly see a b to reproduce steps to reproduce the behavior create a loop region on the timeline hover your cursor over the loop region observe the open hand icon which indicates the potential for dragging click there observe the icon does not change to the closed gloved hand for the drag expected behavior when clicking in a loop region in the timeline while audio is not loop playing the cursor icon should change to a closed hand to indicated draggabilty drag note that when loop play is active then clicking in the loop region restarts the lop play from that point in the loop region screenshots can t do this as screenshots don t capture the pointers additional information please complete the following information os windows and macos monterey but assume all os version audacity all including all x and alpha additional context tantacrul flagging this as a ux issue consistency with cursor icon behaviors on the clip handle drag bars | 0 |
644,707 | 20,985,576,762 | IssuesEvent | 2022-03-29 02:31:22 | quickwit-oss/quickwit | https://api.github.com/repos/quickwit-oss/quickwit | closed | Switch to time::Duration (the time crate, not std::time) | enhancement low-priority | We may want to switch from std::time::Duration to chrono::Duration, it has a more reasonable API.
```
const RUN_INTERVAL: Duration = Duration::minutes(1);
const STAGED_GRACE_PERIOD: Duration = Duration::hours(1);
const DELETION_GRACE_PERIOD: Duration = Duration::minutes(2);
```
vs
```
const RUN_INTERVAL: Duration = Duration::from_secs(60); // 1 minute
const STAGED_GRACE_PERIOD: Duration = Duration::from_secs(60 * 60); // 1 hour
const DELETION_GRACE_PERIOD: Duration = Duration::from_secs(120); // 2 min
``` | 1.0 | Switch to time::Duration (the time crate, not std::time) - We may want to switch from std::time::Duration to chrono::Duration, it has a more reasonable API.
```
const RUN_INTERVAL: Duration = Duration::minutes(1);
const STAGED_GRACE_PERIOD: Duration = Duration::hours(1);
const DELETION_GRACE_PERIOD: Duration = Duration::minutes(2);
```
vs
```
const RUN_INTERVAL: Duration = Duration::from_secs(60); // 1 minute
const STAGED_GRACE_PERIOD: Duration = Duration::from_secs(60 * 60); // 1 hour
const DELETION_GRACE_PERIOD: Duration = Duration::from_secs(120); // 2 min
``` | priority | switch to time duration the time crate not std time we may want to switch from std time duration to chrono duration it has a more reasonable api const run interval duration duration minutes const staged grace period duration duration hours const deletion grace period duration duration minutes vs const run interval duration duration from secs minutes const staged grace period duration duration from secs hour const deletion grace period duration duration from secs min | 1 |
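The readability argument in the snippets above is the same one Python's standard library settles with `datetime.timedelta`: named-unit constructors make intent explicit while staying exactly equivalent to the raw-seconds form. A quick sanity check of the `// 1 minute` / `// 1 hour` / `// 2 min` annotations, offered only as an analogy to the Rust comparison (not an assertion about either Rust crate's API):

```python
from datetime import timedelta

# Named-unit constructors, mirroring Duration::minutes(1) etc.
RUN_INTERVAL = timedelta(minutes=1)
STAGED_GRACE_PERIOD = timedelta(hours=1)
DELETION_GRACE_PERIOD = timedelta(minutes=2)

# They equal the raw-seconds values used in the from_secs variant:
assert RUN_INTERVAL == timedelta(seconds=60)
assert STAGED_GRACE_PERIOD == timedelta(seconds=60 * 60)
assert DELETION_GRACE_PERIOD == timedelta(seconds=120)
```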
656,823 | 21,777,173,471 | IssuesEvent | 2022-05-13 14:51:06 | 1ForeverHD/ZonePlus | https://api.github.com/repos/1ForeverHD/ZonePlus | opened | Fully account for Streaming | Type: Bug Type: Enhancement Priority: High Scope: Core | 1. Ensure ``:getRandomPoint()`` returns nil instead of recurring infinitely if zone parts are 0, e.g. if all streamed all:
```lua
function Zone:getRandomPoint()
    local region = self.exactRegion
    local size = region.Size
    local cframe = region.CFrame
    local random = Random.new()
    local randomCFrame
    local success, touchingZoneParts
    local pointIsWithinZone
    local totalPartsInZone = #self.overlapParams.zonePartsWhitelist.FilterDescendantsInstances
    if totalPartsInZone <= 0 then
        -- It's important we return if there are no parts within the zone otherwise the checks below will recur infinitely.
        -- This could occur for example if streaming is enabled and the zone's parts disappear on the client
        return nil
    end
    repeat
        randomCFrame = cframe * CFrame.new(random:NextNumber(-size.X/2,size.X/2), random:NextNumber(-size.Y/2,size.Y/2), random:NextNumber(-size.Z/2,size.Z/2))
        success, touchingZoneParts = self:findPoint(randomCFrame)
        if success then
            pointIsWithinZone = true
        end
    until pointIsWithinZone
    local randomVector = randomCFrame.Position
    return randomVector, touchingZoneParts
end
```
2. Update line 87 of the tracker so that it accounts for parts being streamed in:
```lua
local function playerAdded(player)
    local function charAdded(character)
        local function trackChar()
            updatePlayerCharacters()
            self:update()
            for _, valueInstance in pairs(character.Humanoid:GetChildren()) do
                if valueInstance:IsA("NumberValue") then
                    valueInstance.Changed:Connect(function()
                        self:update()
                    end)
                end
            end
        end
        local humanoid = character:FindFirstChild("HumanoidRootPart")
        if humanoid then
            task.defer(trackChar)
        else
            character.ChildAdded:Connect(function(child)
                if child.Name == "HumanoidRootPart" and child:IsA("BasePart") then
                    task.defer(trackChar)
                end
            end)
        end
    end
    if player.Character then
        charAdded(player.Character)
    end
    player.CharacterAdded:Connect(function(char)
        charAdded(char)
    end)
    player.CharacterRemoving:Connect(function(removingCharacter)
        self.exitDetections[removingCharacter] = nil
    end)
end
```
| 1.0 | Fully account for Streaming - 1. Ensure ``:getRandomPoint()`` returns nil instead of recurring infinitely if zone parts are 0, e.g. if all streamed all:
```lua
function Zone:getRandomPoint()
    local region = self.exactRegion
    local size = region.Size
    local cframe = region.CFrame
    local random = Random.new()
    local randomCFrame
    local success, touchingZoneParts
    local pointIsWithinZone
    local totalPartsInZone = #self.overlapParams.zonePartsWhitelist.FilterDescendantsInstances
    if totalPartsInZone <= 0 then
        -- It's important we return if there are no parts within the zone otherwise the checks below will recur infinitely.
        -- This could occur for example if streaming is enabled and the zone's parts disappear on the client
        return nil
    end
    repeat
        randomCFrame = cframe * CFrame.new(random:NextNumber(-size.X/2,size.X/2), random:NextNumber(-size.Y/2,size.Y/2), random:NextNumber(-size.Z/2,size.Z/2))
        success, touchingZoneParts = self:findPoint(randomCFrame)
        if success then
            pointIsWithinZone = true
        end
    until pointIsWithinZone
    local randomVector = randomCFrame.Position
    return randomVector, touchingZoneParts
end
```
2. Update line 87 of the tracker so that it accounts for parts being streamed in:
```lua
local function playerAdded(player)
    local function charAdded(character)
        local function trackChar()
            updatePlayerCharacters()
            self:update()
            for _, valueInstance in pairs(character.Humanoid:GetChildren()) do
                if valueInstance:IsA("NumberValue") then
                    valueInstance.Changed:Connect(function()
                        self:update()
                    end)
                end
            end
        end
        local humanoid = character:FindFirstChild("HumanoidRootPart")
        if humanoid then
            task.defer(trackChar)
        else
            character.ChildAdded:Connect(function(child)
                if child.Name == "HumanoidRootPart" and child:IsA("BasePart") then
                    task.defer(trackChar)
                end
            end)
        end
    end
    if player.Character then
        charAdded(player.Character)
    end
    player.CharacterAdded:Connect(function(char)
        charAdded(char)
    end)
    player.CharacterRemoving:Connect(function(removingCharacter)
        self.exitDetections[removingCharacter] = nil
    end)
end
```
| priority | fully account for streaming ensure getrandompoint returns nil instead of recurring infinitely if zone parts are e g if all streamed all lua function zone getrandompoint local region self exactregion local size region size local cframe region cframe local random random new local randomcframe local success touchingzoneparts local pointiswithinzone local totalpartsinzone self overlapparams zonepartswhitelist filterdescendantsinstances if totalpartsinzone then its important we return if there are no parts within the zone otherwise the checks below will recur infinitely this could occur for example if streaming is enabled and the zone s parts disappear on the client return nil end repeat randomcframe cframe cframe new random nextnumber size x size x random nextnumber size y size y random nextnumber size z size z success touchingzoneparts self findpoint randomcframe if success then pointiswithinzone true end until pointiswithinzone local randomvector randomcframe position return randomvector touchingzoneparts end line of tracker so that it accounts for parts being streamed in lua local function playeradded player local function charadded character local function trackchar updateplayercharacters self update for valueinstance in pairs character humanoid getchildren do if valueinstance isa numbervalue then valueinstance changed connect function self update end end end end local humanoid character findfirstchild humanoidrootpart if humanoid then task defer trackchar else character childadded connect function child if child name humanoidrootpart and child isa basepart then task defer trackchar end end end end if player character then charadded player character end player characteradded connect function char charadded char end player characterremoving connect function removingcharacter self exitdetections nil end end | 1 |
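The guard added in point 1 matters because `getRandomPoint` is rejection sampling: draw a point in the zone's bounding region, keep it only if it falls inside some zone part, retry otherwise — with zero parts the accept condition can never hold, so the repeat-until loop would spin forever. A minimal Python sketch of the same pattern and guard (axis-aligned 2D boxes stand in for zone parts; the function and names are illustrative, not ZonePlus's API):

```python
import random

def get_random_point(bounds, parts, rng=None):
    """Rejection-sample a point in `bounds` that falls inside at least one part.

    `bounds` and each part are ((min_x, min_y), (max_x, max_y)) boxes.
    Without the early `return None`, an empty `parts` list (the streamed-out
    case in the issue) would make the accept test below fail forever.
    """
    if not parts:
        return None
    rng = rng or random.Random()
    (bx0, by0), (bx1, by1) = bounds
    while True:
        x, y = rng.uniform(bx0, bx1), rng.uniform(by0, by1)
        for (x0, y0), (x1, y1) in parts:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return (x, y)

print(get_random_point(((0, 0), (10, 10)), []))  # → None
```

The same reasoning explains why the fix returns `nil` rather than retrying: once the part whitelist is empty on the client, no amount of resampling can succeed until parts stream back in.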
720,475 | 24,794,231,594 | IssuesEvent | 2022-10-24 15:53:54 | OpenLiberty/liberty-tools-vscode | https://api.github.com/repos/OpenLiberty/liberty-tools-vscode | closed | Liberty: History action to save previous Start... choices | 3 medium priority GUI SVT-req | To avoid having to re-type the same custom parameters each run add a "History" option similar to VS Code for Maven to view past runs
<img width="989" alt="image" src="https://user-images.githubusercontent.com/26146482/186692064-e4191a2e-d107-4827-abd4-e79efce2738b.png">
<img width="878" alt="image" src="https://user-images.githubusercontent.com/26146482/189976999-17385f9a-92c2-4c82-83f6-d370db9a785c.png">
| 1.0 | Liberty: History action to save previous Start... choices - To avoid having to re-type the same custom parameters each run add a "History" option similar to VS Code for Maven to view past runs
<img width="989" alt="image" src="https://user-images.githubusercontent.com/26146482/186692064-e4191a2e-d107-4827-abd4-e79efce2738b.png">
<img width="878" alt="image" src="https://user-images.githubusercontent.com/26146482/189976999-17385f9a-92c2-4c82-83f6-d370db9a785c.png">
| priority | liberty history action to save previous start choices to avoid having to re type the same custom parameters each run add a history option similar to vs code for maven to view past runs img width alt image src img width alt image src | 1 |
331,857 | 10,077,534,245 | IssuesEvent | 2019-07-24 18:58:13 | Atlantiss/NetherwingBugtracker | https://api.github.com/repos/Atlantiss/NetherwingBugtracker | closed | [NPC]Parasitic Shadowfiend instantly dying to Consecration | Exploit/Abuse - Priority Raid | **Description**:
On Illidan Phase 1, every now and then someone in the raid gets a dot which spawns two Parasitic Shadowfiend with like 3k hp that target raid members and smash them with their evil psyonic powers.
**Current behaviour**:
When the Shadowfiends spawn and there's a paladin's consecration on the ground, they instantly die. Illidan killing guilds have abused this exploit for weeks without reporting it (e.g. https://www.twitch.tv/videos/451325666?t=2h45m25s where you see the consecration drop right before the Shadowfiend spawns), making this method *The Golden Standard* for anyone who is attempting Illidan the first time.
**Expected behaviour**:
The Parasitic Shadowfiends should not instantly die from a consecration, but be killed by the raid members.
**Server Revision**:
3072 | 1.0 | [NPC]Parasitic Shadowfiend instantly dying to Consecration - **Description**:
On Illidan Phase 1, every now and then someone in the raid gets a dot which spawns two Parasitic Shadowfiend with like 3k hp that target raid members and smash them with their evil psyonic powers.
**Current behaviour**:
When the Shadowfiends spawn and there's a paladin's consecration on the ground, they instantly die. Illidan killing guilds have abused this exploit for weeks without reporting it (e.g. https://www.twitch.tv/videos/451325666?t=2h45m25s where you see the consecration drop right before the Shadowfiend spawns), making this method *The Golden Standard* for anyone who is attempting Illidan the first time.
**Expected behaviour**:
The Parasitic Shadowfiends should not instantly die from a consecration, but be killed by the raid members.
**Server Revision**:
3072 | priority | parasitic shadowfiend instantly dying to consecration description on illidan phase every now and then someone in the raid gets a dot which spawns two parasitic shadowfiend with like hp that target raid members and smash them with their evil psyonic powers current behaviour when the shadowfiends spawn and there s a paladin s consecration on the ground they instantly die illidan killing guilds have abused this exploit for weeks without reporting it e g where you see the consecration drop right before the shadowfiend spawns making this method the golden standard for anyone who is attempting illidan the first time expected behaviour the parasitic shadowfiends should not instantly die from a consecration but be killed by the raid members server revision | 1 |
341,099 | 10,287,405,736 | IssuesEvent | 2019-08-27 09:02:44 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Make visible bookmark bar on NTP by default | QA/Yes feature/new-tab feature/settings priority/P2 | As https://github.com/brave/brave-core/pull/2869 is merged, user can choose whether bookmark bar is visible or not on NTP regardless of bookmark show option.
By default bookmark bar is off on NTP.
It should be visible by default. | 1.0 | Make visible bookmark bar on NTP by default - As https://github.com/brave/brave-core/pull/2869 is merged, user can choose whether bookmark bar is visible or not on NTP regardless of bookmark show option.
By default bookmark bar is off on NTP.
It should be visible by default. | priority | make visible bookmark bar on ntp by default as is merged user can choose whether bookmark bar is visible or not on ntp regardless of bookmark show option by default bookmark bar is off on ntp it should be visible by default | 1 |
307,272 | 26,521,619,460 | IssuesEvent | 2023-01-19 03:34:49 | ConaireD/TolimanWIP | https://api.github.com/repos/ConaireD/TolimanWIP | closed | Assess Test Coverage with `pytest-cov`. | tests | Hi all.
The title explains it all. This is a little overkill given how small the package is but I think it is worth it as a learning experience.
Regards
Jordan. | 1.0 | Assess Test Coverage with `pytest-cov`. - Hi all.
The title explains it all. This is a little overkill given how small the package is but I think it is worth it as a learning experience.
Regards
Jordan. | non_priority | assess test coverage with pytest cov hi all the title explains it all this is a little overkill given how small the package is but i think it is worth it as a learning experience regards jordan | 0 |
74,744 | 15,368,451,521 | IssuesEvent | 2021-03-02 05:35:15 | iUoB/help.iuob.uk | https://api.github.com/repos/iUoB/help.iuob.uk | closed | CVE-2016-10735 (Medium) detected in bootstrap-3.3.5.min.js | security vulnerability | ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: help.iuob.uk/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: help.iuob.uk/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/iUoB/help.iuob.uk/commit/bcc75f2ab9e6cb9d1e223057420d85c615ce7619">bcc75f2ab9e6cb9d1e223057420d85c615ce7619</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-10735 (Medium) detected in bootstrap-3.3.5.min.js - ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: help.iuob.uk/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: help.iuob.uk/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/iUoB/help.iuob.uk/commit/bcc75f2ab9e6cb9d1e223057420d85c615ce7619">bcc75f2ab9e6cb9d1e223057420d85c615ce7619</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file help iuob uk node modules autocomplete js test playground jquery html path to vulnerable library help iuob uk node modules autocomplete js test playground jquery html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
357,382 | 10,605,932,660 | IssuesEvent | 2019-10-10 21:40:29 | wherebyus/general-tasks | https://api.github.com/repos/wherebyus/general-tasks | opened | When creating promo coupon codes, I would like to be able to upload the codes and data as a csv and/or clone codes to make the work less tedious | Priority: Low Product: Promos UX: Not Validated | ## Feature or problem
We have to create coupons one at a time, which can be a real pain in the butt. It would be great to have a bulk upload for coupon codes or some ability to clone/duplicate codes to reduce the tedious data entry
## UX Validation
Not Validated
### Suggested priority
Low
### Stakeholders
*Submitted:* rebekah
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 3
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
| 1.0 | When creating promo coupon codes, I would like to be able to upload the codes and data as a csv and/or clone codes to make the work less tedious - ## Feature or problem
We have to create coupons one at a time, which can be a real pain in the butt. It would be great to have a bulk upload for coupon codes or some ability to clone/duplicate codes to reduce the tedious data entry
## UX Validation
Not Validated
### Suggested priority
Low
### Stakeholders
*Submitted:* rebekah
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 3
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
 | priority | when creating promo coupon codes i would like to be able to upload the codes and data as a csv and or clone codes to make the work less tedious feature or problem we have to create coupons one at a time which can be a real pain in the butt it would be great to have a bulk upload for coupon codes or some ability to clone duplicate codes to reduce the tedious data entry ux validation not validated suggested priority low stakeholders submitted rebekah definition of done how will we know when this feature is complete subtasks a detailed list of changes that need to be made or subtasks one checkbox per brew the coffee developer estimate to help the team accurately estimate the complexity of this task take a moment to walk through this list and estimate each item at the end you can total the estimates and round to the nearest prime number if any of these are at a or higher or if the total is above a consider breaking this issue into multiple smaller issues changes to the database changes to the api testing changes to the api changes to application code adding or updating unit tests local developer testing total developer estimate additional estimate code review qa testing stakeholder sign off deploy to production total additional estimate qa notes detailed instructions for testing one checkbox per test to be completed contextual tests accessibility check cross browser check edge chrome firefox responsive check | 1 |
12,154 | 9,582,581,676 | IssuesEvent | 2019-05-08 01:22:11 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | Go to definition and intellisense | Feature: Go to Definition Language Service bug fixed (release pending) quick fix regression | **Type: LanguageService**
<!----- Input information below ----->
**Describe the bug**
- VS Code Version: 1.31
- C/C++ Extension Version: 0.21

Code as above.
- A small bug: symbol behind #ifndef can't go to definition. I think it's the same with symbols behind "goto".
- If DT_VOID isn't defined, symbol P and ulLen can't go to definition and it doesn't show wave line under symbol "DT_VOID". | 1.0 | Go to definition and intellisense - **Type: LanguageService**
<!----- Input information below ----->
**Describe the bug**
- VS Code Version: 1.31
- C/C++ Extension Version: 0.21

Code as above.
- A small bug: symbol behind #ifndef can't go to definition. I think it's the same with symbols behind "goto".
- If DT_VOID isn't defined, symbol P and ulLen can't go to definition and it doesn't show wave line under symbol "DT_VOID". | non_priority | go to definition and intellisense type languageservice describe the bug vs code version c c extension version code as above a small bug symbol behind ifndef can t go to definition i think it s the same with symbols behind goto if dt void isn t defined symbol p and ullen can t go to definition and it doesn t show wave line under symbol dt void | 0 |
28,932 | 5,437,887,476 | IssuesEvent | 2017-03-06 08:48:41 | line/armeria | https://api.github.com/repos/line/armeria | closed | Exception flooding in ZooKeeperRegistrationTest | defect | https://travis-ci.org/line/armeria/builds/206876098#L1240
```
05:14:02.621 [Executors-Default-1-SendThread(127.0.0.1:45011)] WARN org.apache.zookeeper.ClientCnxn - Session 0x15a8d7038580016 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
...
```
Probably some race condition? Could you take a look, @jonefeewang ? | 1.0 | Exception flooding in ZooKeeperRegistrationTest - https://travis-ci.org/line/armeria/builds/206876098#L1240
```
05:14:02.621 [Executors-Default-1-SendThread(127.0.0.1:45011)] WARN org.apache.zookeeper.ClientCnxn - Session 0x15a8d7038580016 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
...
```
Probably some race condition? Could you take a look, @jonefeewang ? | non_priority | exception flooding in zookeeperregistrationtest warn org apache zookeeper clientcnxn session for server null unexpected error closing socket connection and attempting reconnect java net connectexception connection refused at sun nio ch socketchannelimpl checkconnect native method at sun nio ch socketchannelimpl finishconnect socketchannelimpl java at org apache zookeeper clientcnxnsocketnio dotransport clientcnxnsocketnio java at org apache zookeeper clientcnxn sendthread run clientcnxn java probably some race condition could you take a look jonefeewang | 0 |
115,488 | 24,770,835,221 | IssuesEvent | 2022-10-23 05:21:13 | IAmTamal/Milan | https://api.github.com/repos/IAmTamal/Milan | closed | Updates to Bug issue templates | 💻 aspect: code 🟧 priority: high 🛠 goal: fix 🤖 aspect: dx 🛠 status : under development good first issue hacktoberfest | ### Description
- For docs, features we have a title by default of `[DOCS] <description>`
- I want the same for the bugs too. Make the title `[BUGS] <description>` for the bug issues.
- In the dropdown change `None` to `No, someone else can work on it` **(Screenshot 3)**
- Replace `Have you checked if this issue has been raised before?` with `Have you checked for similar open issues ?` **(Screenshot 4)**
### Screenshots
## Other issues :

## Bugs :

## Screenshot 3

## Screenshot 4

### Additional information
_No response_
### 🥦 Browser
Brave
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
_No response_ | 1.0 | Updates to Bug issue templates - ### Description
- For docs, features we have a title by default of `[DOCS] <description>`
- I want the same for the bugs too. Make the title `[BUGS] <description>` for the bug issues.
- In the dropdown change `None` to `No, someone else can work on it` **(Screenshot 3)**
- Replace `Have you checked if this issue has been raised before?` with `Have you checked for similar open issues ?` **(Screenshot 4)**
### Screenshots
## Other issues :

## Bugs :

## Screenshot 3

## Screenshot 4

### Additional information
_No response_
### 🥦 Browser
Brave
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
_No response_ | non_priority | updates to bug issue templates description for docs features we have a title by default of i want the same for the bugs too make the title for the bug issues in the dropdown change none to no someone else can work on it screenshot replace have you checked if this issue has been raised before with have you checked for similar open issues screenshot screenshots other issues bugs screenshot screenshot additional information no response 🥦 browser brave 👀 have you checked if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the contributing guidelines i have read the are you willing to work on this issue no response | 0 |
175,357 | 21,300,995,147 | IssuesEvent | 2022-04-15 03:04:27 | rvvergara/exercism-challenges | https://api.github.com/repos/rvvergara/exercism-challenges | opened | CVE-2021-37712 (High) detected in tar-4.4.1.tgz | security vulnerability | ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-21.2.1.tgz (Root Library)
- jest-cli-21.2.1.tgz
- jest-haste-map-21.2.0.tgz
- sane-2.5.2.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (jest): 21.3.0-alpha.1e3ee68e</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37712 (High) detected in tar-4.4.1.tgz - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-21.2.1.tgz (Root Library)
- jest-cli-21.2.1.tgz
- jest-haste-map-21.2.0.tgz
- sane-2.5.2.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (jest): 21.3.0-alpha.1e3ee68e</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy jest tgz root library jest cli tgz jest haste map tgz sane tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution jest alpha step up your open source security game with whitesource | 0 |
445,332 | 12,829,098,689 | IssuesEvent | 2020-07-06 22:00:14 | rpitv/glimpse-api | https://api.github.com/repos/rpitv/glimpse-api | closed | Add Docker support | Priority: LOW enhancement | Vagrant is currently supported, however a Docker file should also be written for production. | 1.0 | Add Docker support - Vagrant is currently supported, however a Docker file should also be written for production. | priority | add docker support vagrant is currently supported however a docker file should also be written for production | 1 |
91,884 | 3,863,516,844 | IssuesEvent | 2016-04-08 09:45:41 | iamxavier/elmah | https://api.github.com/repos/iamxavier/elmah | closed | SqlCompactErrorLog assumes that if the database file exists, it has the elmah tables | auto-migrated Component-Persistence Priority-Medium Type-Enhancement | ```
What steps will reproduce the problem?
1. Use elmah on a ASP.NET project that already uses SQL Server CE 4.0
2. Go to the elmah page
3. It fails.
What is the expected output? What do you see instead?
Expected: elmah page.
Instead: error.
What version of the product are you using? On what operating system?
elmah 1.2 beta
Please provide any additional information below.
The SqlCompactErrorLog.InitializeDatabase() method assumes (twice) that if the
database file exists, it must already have the elmah tables. This is not always
the case. So, the simplest alternative could be to verify that the table really
exists (once you verify that the database file exists), instead of assuming
that it does.
```
Original issue reported on code.google.com by `je...@garzazambrano.net` on 28 Apr 2011 at 8:15 | 1.0 | SqlCompactErrorLog assumes that if the database file exists, it has the elmah tables - ```
What steps will reproduce the problem?
1. Use elmah on a ASP.NET project that already uses SQL Server CE 4.0
2. Go to the elmah page
3. It fails.
What is the expected output? What do you see instead?
Expected: elmah page.
Instead: error.
What version of the product are you using? On what operating system?
elmah 1.2 beta
Please provide any additional information below.
The SqlCompactErrorLog.InitializeDatabase() method assumes (twice) that if the
database file exists, it must already have the elmah tables. This is not always
the case. So, the simplest alternative could be to verify that the table really
exists (once you verify that the database file exists), instead of assuming
that it does.
```
Original issue reported on code.google.com by `je...@garzazambrano.net` on 28 Apr 2011 at 8:15 | priority | sqlcompacterrorlog assumes that if the database file exists it has the elmah tables what steps will reproduce the problem use elmah on a asp net project that already uses sql server ce go to the elmah page it fails what is the expected output what do you see instead expected elmah page instead error what version of the product are you using on what operating system elmah beta please provide any additional information below the sqlcompacterrorlog initializedatabase method assumes twice that if the database file exists it must already have the elmah tables this is not always the case so the simplest alternative could be to verify that the table really exists once you verify that the database file exists instead of assuming that it does original issue reported on code google com by je garzazambrano net on apr at | 1 |
84,470 | 16,504,469,765 | IssuesEvent | 2021-05-25 17:32:07 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | closed | Remove TPU training related logic from `ModelCheckpointCallback` | Priority P1 checkpoint help wanted refactors and code health | ## 🐛 Enhancement
Remove TPU training related logic from `ModelCheckpointCallback`
### Expected behavior
Design-wise, it should be decoupled from any specific training plugins-related logic. Had it initially for writing checkpoints on every host machine for TPU Pod training
| 1.0 | Remove TPU training related logic from `ModelCheckpointCallback` - ## 🐛 Enhancement
Remove TPU training related logic from `ModelCheckpointCallback`
### Expected behavior
Design-wise, it should be decoupled from any specific training plugins-related logic. Had it initially for writing checkpoints on every host machine for TPU Pod training
| non_priority | remove tpu training related logic from modelcheckpointcallback 🐛 enhancement remove tpu training related logic from modelcheckpointcallback expected behavior design wise it should be decoupled from any specific training plugins related logic had it initially for writing checkpoints on every host machine for tpu pod training | 0 |
165,436 | 20,574,672,166 | IssuesEvent | 2022-03-04 02:22:46 | AlexRogalskiy/github-action-user-contribution | https://api.github.com/repos/AlexRogalskiy/github-action-user-contribution | closed | CVE-2022-0155 (Medium) detected in follow-redirects-1.14.5.tgz - autoclosed | security vulnerability | ## CVE-2022-0155 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.5.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.5.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- monika-1.6.8.tgz (Root Library)
- axios-0.21.4.tgz
- :x: **follow-redirects-1.14.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/9e68ebed6109cfe070cb944c0e7e2f4c54d717de">9e68ebed6109cfe070cb944c0e7e2f4c54d717de</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: follow-redirects - v1.14.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0155 (Medium) detected in follow-redirects-1.14.5.tgz - autoclosed - ## CVE-2022-0155 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.5.tgz</b></p></summary>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.5.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- monika-1.6.8.tgz (Root Library)
- axios-0.21.4.tgz
- :x: **follow-redirects-1.14.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/9e68ebed6109cfe070cb944c0e7e2f4c54d717de">9e68ebed6109cfe070cb944c0e7e2f4c54d717de</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: follow-redirects - v1.14.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in follow redirects tgz autoclosed cve medium severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy monika tgz root library axios tgz x follow redirects tgz vulnerable library found in head commit a href vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects step up your open source security game with whitesource | 0 |
813,450 | 30,458,528,799 | IssuesEvent | 2023-07-17 03:45:48 | openmsupply/open-msupply | https://api.github.com/repos/openmsupply/open-msupply | closed | Remote authorisation module: add the possibility to authorise in quantity of default pack size (or "packs") | enhancement Priority: Must Have | <!-- NOTE: before adding something to your issue name or description, check that there does not already exist a label for it! -->
Following discussion between customer, @DhanyaHerath and @craigdrown, @craigdrown asked me to do a mockup of they would like to have:
## Is your feature request related to a problem? Please describe 👀
user is used to deal in packs not in units.
## Describe the solution you'd like 🎁
have the possibility to authorise in quantity of default pack size (or "packs")
<img width="1024" alt="Screenshot 2023-05-28 at 5 59 26 PM" src="https://github.com/openmsupply/open-msupply/assets/74992958/7b6fc951-616a-4410-b1b2-0eeb0bb2bde8">
- [ ] Add a column called "Authorised Quantity (Packs)" (FR: "Quantité Autorisée (Boites)")
- [ ] Column "Authorised Quantity (Units)" (FR: Quantité Autorisée (Unités) is not editable and will only display the following calculation: pack size (FR: "Conditionnement") x "Authorised Quantity (Packs)"
- [ ] you can shorten "Conditionnement" to "Cond." if needed
### Describe alternatives you've considered 💭
That's the only thing I could think of my tired brain but I'm sure there is other stuff we can do. May be just as good to have a button to switch the whole interface in "number of packs" based on the default pack size. For Djibouti, default view would be "Quantity in packs".
Impacted fields:
- Supplier SOH
- Customer SOH
- Suggested Quantity (2 decimals max)
- Customer AMC (2 decimals max)
- Authorised Quantity
When in packs, all word "(Units)" (FR: "(Unités)") must be replaced by "(Packs)" (FR: "(Boites)")
### Additional context 💌
<!-- Add any other context or screenshots about the feature request here. -->
### Moneyworks Jobcode 🧰
DJ
<!-- Add the moneyworks jobcode for this change if you know what it is. --> | 1.0 | Remote authorisation module: add the possibility to authorise in quantity of default pack size (or "packs") - <!-- NOTE: before adding something to your issue name or description, check that there does not already exist a label for it! -->
Following discussion between customer, @DhanyaHerath and @craigdrown, @craigdrown asked me to do a mockup of they would like to have:
## Is your feature request related to a problem? Please describe 👀
user is used to deal in packs not in units.
## Describe the solution you'd like 🎁
have the possibility to authorise in quantity of default pack size (or "packs")
<img width="1024" alt="Screenshot 2023-05-28 at 5 59 26 PM" src="https://github.com/openmsupply/open-msupply/assets/74992958/7b6fc951-616a-4410-b1b2-0eeb0bb2bde8">
- [ ] Add a column called "Authorised Quantity (Packs)" (FR: "Quantité Autorisée (Boites)")
- [ ] Column "Authorised Quantity (Units)" (FR: Quantité Autorisée (Unités) is not editable and will only display the following calculation: pack size (FR: "Conditionnement") x "Authorised Quantity (Packs)"
- [ ] you can shorten "Conditionnement" to "Cond." if needed
### Describe alternatives you've considered 💭
That's the only thing I could think of my tired brain but I'm sure there is other stuff we can do. May be just as good to have a button to switch the whole interface in "number of packs" based on the default pack size. For Djibouti, default view would be "Quantity in packs".
Impacted fields:
- Supplier SOH
- Customer SOH
- Suggested Quantity (2 decimals max)
- Customer AMC (2 decimals max)
- Authorised Quantity
When in packs, all word "(Units)" (FR: "(Unités)") must be replaced by "(Packs)" (FR: "(Boites)")
### Additional context 💌
<!-- Add any other context or screenshots about the feature request here. -->
### Moneyworks Jobcode 🧰
DJ
<!-- Add the moneyworks jobcode for this change if you know what it is. --> | priority | remote authorisation module add the possibility to authorise in quantity of default pack size or packs following discussion between customer dhanyaherath and craigdrown craigdrown asked me to do a mockup of they would like to have is your feature request related to a problem please describe 👀 user is used to deal in packs not in units describe the solution you d like 🎁 have the possibility to authorise in quantity of default pack size or packs img width alt screenshot at pm src add a column called authorised quantity packs fr quantité autorisée boites column authorised quantity units fr quantité autorisée unités is not editable and will only display the following calculation pack size fr conditionnement x authorised quantity packs you can shorten conditionnement to cond if needed describe alternatives you ve considered 💭 that s the only thing i could think of my tired brain but i m sure there is other stuff we can do may be just as good to have a button to switch the whole interface in number of packs based on the default pack size for djibouti default view would be quantity in packs impacted fields supplier soh customer soh suggested quantity decimals max customer amc decimals max authorised quantity when in packs all word units fr unités must be replaced by packs fr boites additional context 💌 moneyworks jobcode 🧰 dj | 1 |
136,147 | 30,484,307,107 | IssuesEvent | 2023-07-17 23:49:43 | files-community/Files | https://api.github.com/repos/files-community/Files | closed | Code Quality: Move away from storage apis for enumerating items | codebase quality | ### What feature or improvement do you think would benefit Files?
We're still using the WinRT Storage apis to enumerate items in certain locations, it works but it has performance issues.
### Requirements
- Move away from the Storage APIs for enumerating items in network locations (and phones?)
### Files Version
v2.4.55
### Windows Version
Windows 11
### Comments
_No response_ | 1.0 | Code Quality: Move away from storage apis for enumerating items - ### What feature or improvement do you think would benefit Files?
We're still using the WinRT Storage apis to enumerate items in certain locations, it works but it has performance issues.
### Requirements
- Move away from the Storage APIs for enumerating items in network locations (and phones?)
### Files Version
v2.4.55
### Windows Version
Windows 11
### Comments
_No response_ | non_priority | code quality move away from storage apis for enumerating items what feature or improvement do you think would benefit files we re still using the winrt storage apis to enumerate items in certain locations it works but it has performance issues requirements move away from the storage apis for enumerating items in network locations and phones files version windows version windows comments no response | 0 |
36,205 | 8,059,914,004 | IssuesEvent | 2018-08-03 00:37:02 | fdorg/flashdevelop | https://api.github.com/repos/fdorg/flashdevelop | closed | [Haxe][CodeComplete][InferVariableType] Wrong inference the type of the variable | bug coderefactor haxe | ```haxe
class Main {
public static function main(?v$(EntryPoint) = "") {
}
}
```
actual result after execution `Generate private variable`:
```haxe
class Main {
static var v:Null<Dynamic>;
public static function main(?v = "") {
Main.v = v;
}
}
```
expected result
```haxe
class Main {
static var v:Null<String>;
public static function main(?v = "") {
Main.v = v;
}
}
``` | 1.0 | [Haxe][CodeComplete][InferVariableType] Wrong inference the type of the variable - ```haxe
class Main {
public static function main(?v$(EntryPoint) = "") {
}
}
```
actual result after execution `Generate private variable`:
```haxe
class Main {
static var v:Null<Dynamic>;
public static function main(?v = "") {
Main.v = v;
}
}
```
expected result
```haxe
class Main {
static var v:Null<String>;
public static function main(?v = "") {
Main.v = v;
}
}
``` | non_priority | wrong inference the type of the variable haxe class main public static function main v entrypoint actual result after execution generate private variable haxe class main static var v null public static function main v main v v expected result haxe class main static var v null public static function main v main v v | 0 |
61,031 | 14,939,118,069 | IssuesEvent | 2021-01-25 16:33:50 | EIDSS/EIDSS7 | https://api.github.com/repos/EIDSS/EIDSS7 | closed | ADHOC: Human Aberration Analysis | Build 82.2 Major bug | **Summary**
Tried several diseases and date ranges, but each time the graph that displayed showed zero for observed and expected cases.
**To Reproduce**
Steps to reproduce the behavior:
1. Log in as: adamanders
2. Go to: Human Aberration Analysis
3. Click on: Any disease, different date ranges
**Expected behavior**
There should be a two trend lines one for observed and one for expected cases over time.
**Screenshots**


SEPARATE RUN

**Additional details:**
- Build: 82.2
- Script title (enter ad hoc if not script-based): Ad-hoc
**Issue severity (Optional)**
Severity (critical, major, minor, low):
**Additional context**
Add any other context about the problem here.
| 1.0 | ADHOC: Human Aberration Analysis - **Summary**
Tried several diseases and date ranges, but each time the graph that displayed showed zero for observed and expected cases.
**To Reproduce**
Steps to reproduce the behavior:
1. Log in as: adamanders
2. Go to: Human Aberration Analysis
3. Click on: Any disease, different date ranges
**Expected behavior**
There should be a two trend lines one for observed and one for expected cases over time.
**Screenshots**


SEPARATE RUN

**Additional details:**
- Build: 82.2
- Script title (enter ad hoc if not script-based): Ad-hoc
**Issue severity (Optional)**
Severity (critical, major, minor, low):
**Additional context**
Add any other context about the problem here.
| non_priority | adhoc human aberration analysis summary tried several diseases and date ranges but each time the graph that displayed showed zero for observed and expected cases to reproduce steps to reproduce the behavior log in as adamanders go to human aberration analysis click on any disease different date ranges expected behavior there should be a two trend lines one for observed and one for expected cases over time screenshots separate run additional details build script title enter ad hoc if not script based ad hoc issue severity optional severity critical major minor low additional context add any other context about the problem here | 0 |
197,987 | 14,952,743,798 | IssuesEvent | 2021-01-26 15:53:09 | microsoftgraph/microsoft-graph-explorer-v4 | https://api.github.com/repos/microsoftgraph/microsoft-graph-explorer-v4 | closed | Create tests to eliminate accessibility issues in Graph Explorer | Area: Accessibility Area: Testing enhancement | **Is your feature request related to a problem? Please describe.**
GE needs to be accessible to everyone.
**Describe the solution you'd like**
Create a test to eliminate accessibility issues in Graph Explorer
[AB#7595](https://microsoftgraph.visualstudio.com/0985d294-5762-4bc2-a565-161ef349ca3e/_workitems/edit/7595) | 1.0 | Create tests to eliminate accessibility issues in Graph Explorer - **Is your feature request related to a problem? Please describe.**
GE needs to be accessible to everyone.
**Describe the solution you'd like**
Create a test to eliminate accessibility issues in Graph Explorer
[AB#7595](https://microsoftgraph.visualstudio.com/0985d294-5762-4bc2-a565-161ef349ca3e/_workitems/edit/7595) | non_priority | create tests to eliminate accessibility issues in graph explorer is your feature request related to a problem please describe ge needs to be accessible to everyone describe the solution you d like create a test to eliminate accessibility issues in graph explorer | 0 |
64,961 | 14,704,928,018 | IssuesEvent | 2021-01-04 17:15:23 | SmartBear/ruby-handlebars | https://api.github.com/repos/SmartBear/ruby-handlebars | opened | CVE-2020-10663 (High) detected in json-2.1.0.gem | security vulnerability | ## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.1.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.1.0.gem">https://rubygems.org/gems/json-2.1.0.gem</a></p>
<p>
Dependency Hierarchy:
- simplecov-0.16.1.gem (Root Library)
- :x: **json-2.1.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ruby-handlebars/commit/1e38db52d43b521c768fe20ae08352cce6994c01">1e38db52d43b521c768fe20ae08352cce6994c01</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Ruby","packageName":"json","packageVersion":"2.1.0","isTransitiveDependency":true,"dependencyTree":"simplecov:0.16.1;json:2.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.0"}],"vulnerabilityIdentifier":"CVE-2020-10663","vulnerabilityDetails":"The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-10663 (High) detected in json-2.1.0.gem - ## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.1.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.1.0.gem">https://rubygems.org/gems/json-2.1.0.gem</a></p>
<p>
Dependency Hierarchy:
- simplecov-0.16.1.gem (Root Library)
- :x: **json-2.1.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ruby-handlebars/commit/1e38db52d43b521c768fe20ae08352cce6994c01">1e38db52d43b521c768fe20ae08352cce6994c01</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Ruby","packageName":"json","packageVersion":"2.1.0","isTransitiveDependency":true,"dependencyTree":"simplecov:0.16.1;json:2.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.0"}],"vulnerabilityIdentifier":"CVE-2020-10663","vulnerabilityDetails":"The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in json gem cve high severity vulnerability vulnerable library json gem this is a json implementation as a ruby extension in c library home page a href dependency hierarchy simplecov gem root library x json gem vulnerable library found in head commit a href found in base branch master vulnerability details the json gem through for ruby as used in ruby through through and through has an unsafe object creation vulnerability this is quite similar to cve but does not rely on poor garbage collection behavior within ruby specifically use of json parsing methods can lead to creation of a malicious object within the interpreter with adverse effects that are application dependent publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged 
impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the json gem through for ruby as used in ruby through through and through has an unsafe object creation vulnerability this is quite similar to cve but does not rely on poor garbage collection behavior within ruby specifically use of json parsing methods can lead to creation of a malicious object within the interpreter with adverse effects that are application dependent vulnerabilityurl | 0 |
526,989 | 15,306,160,757 | IssuesEvent | 2021-02-24 19:05:19 | blchelle/collabogreat | https://api.github.com/repos/blchelle/collabogreat | closed | Error Dialog Appears when Renaming Stage | Priority: High Status: In Progress Type: Bug | ## Description
When the user renames a stage an error dialogue pops up saying there was an unknown error, though it appears that the operation was completed successfully. Find out where the error comes from, whether or not the operation is truly successful and then fix the issue. | 1.0 | Error Dialog Appears when Renaming Stage - ## Description
When the user renames a stage an error dialogue pops up saying there was an unknown error, though it appears that the operation was completed successfully. Find out where the error comes from, whether or not the operation is truly successful and then fix the issue. | priority | error dialog appears when renaming stage description when the user renames a stage an error dialogue pops up saying there was an unknown error though it appears that the operation was completed successfully find out where the error comes from whether or not the operation is truly successful and then fix the issue | 1 |