Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,070 | 5,119,314,104 | IssuesEvent | 2017-01-08 16:43:29 | PythonNut/virtualbox-remote-snapshots | https://api.github.com/repos/PythonNut/virtualbox-remote-snapshots | closed | Document usage of "local" repositories | documentation priority:low | Since `borg mount` does not exist locally, differential extraction does not exist for local repositories.
Although `virtualbox-remote-snapshots` optimizes specifically for remote devices, local devices (i.e. big hard drives connected locally, while the VMs are stored on SSDs) should work too. | 1.0 | Document usage of "local" repositories - Since `borg mount` does not exist locally, differential extraction does not exist for local repositories.
Although `virtualbox-remote-snapshots` optimizes specifically for remote devices, local devices (i.e. big hard drives connected locally, while the VMs are stored on SSDs) should work too. | non_test | document usage of local repositories since borg mount does not exist locally differential extraction does not exist for local repositories although virtualbox remote snapshots optimizes specifically for remote devices local devices i e big hard drives connected locally while the vms are stored on ssds should work too | 0 |
6,177 | 22,362,922,429 | IssuesEvent | 2022-06-15 22:50:27 | nautobot/nautobot | https://api.github.com/repos/nautobot/nautobot | closed | Option to hide job source | group: automation group: ui-ux | ### As ...
Patti - Platform Admin
### I want ...
A configuration option to entirely disable the Job `Source` tab from the Job execution and Job results pages.
### So that ...
Potentially sensitive code is not exposed to end users of Jobs, and users are not confused by potentially inconsistent Job source based on historical Job results (see #1372)
### I know this is done when...
- I have a global configuration setting (boolean) that controls whether the Job source tab is shown in the UI or not.
- The global configuration settings can be set in the UI (Constance config)
- The setting defaults to `True` for existing installs.
### Optional - Feature groups this request pertains to.
- [X] Automation
- [ ] Circuits
- [ ] DCIM
- [ ] IPAM
- [X] Misc (including Data Sources)
- [ ] Organization
- [ ] Plugins (and other Extensibility)
- [X] Security (Secrets, etc)
- [ ] Image Management
- [X] UI/UX
- [ ] Documentation
- [ ] Other (not directly a platform feature)
### Database Changes
_No response_
### External Dependencies
_No response_ | 1.0 | Option to hide job source - ### As ...
Patti - Platform Admin
### I want ...
A configuration option to entirely disable the Job `Source` tab from the Job execution and Job results pages.
### So that ...
Potentially sensitive code is not exposed to end users of Jobs, and users are not confused by potentially inconsistent Job source based on historical Job results (see #1372)
### I know this is done when...
- I have a global configuration setting (boolean) that controls whether the Job source tab is shown in the UI or not.
- The global configuration settings can be set in the UI (Constance config)
- The setting defaults to `True` for existing installs.
### Optional - Feature groups this request pertains to.
- [X] Automation
- [ ] Circuits
- [ ] DCIM
- [ ] IPAM
- [X] Misc (including Data Sources)
- [ ] Organization
- [ ] Plugins (and other Extensibility)
- [X] Security (Secrets, etc)
- [ ] Image Management
- [X] UI/UX
- [ ] Documentation
- [ ] Other (not directly a platform feature)
### Database Changes
_No response_
### External Dependencies
_No response_ | non_test | option to hide job source as patti platform admin i want a configuration option to entirely disable the job source tab from the job execution and job results pages so that potentially sensitive code is not exposed to end users of jobs and users are not confused by potentially inconsistent job source based on historical job results see i know this is done when i have a global configuration setting boolean that controls whether the job source tab is shown in the ui or not the global configuration settings can be set in the ui constance config the setting defaults to true for existing installs optional feature groups this request pertains to automation circuits dcim ipam misc including data sources organization plugins and other extensibility security secrets etc image management ui ux documentation other not directly a platform feature database changes no response external dependencies no response | 0 |
514,567 | 14,940,979,489 | IssuesEvent | 2021-01-25 19:04:13 | Sequel-Ace/Sequel-Ace | https://api.github.com/repos/Sequel-Ace/Sequel-Ace | closed | Change Complete with Backticks --> Always Complete With Backticks | Feature Request Low priority PR Welcome stale | Please rename the preference "Complete with Backticks" to be "Always Complete With Backticks".
Some people want to use backticks with every identifier. That's rare, but I guess HFWT.
Everybody else that is using autocomplete expects that backticks will be added automatically if required. e.g. if you have a table column named `varchar` for some reason then autocompleting `mytable.var...` should insert backticks when you accept the recommended text.
---
Alternatively there is a case to be made in UX design to always avoid checkboxes. If you subscribe to that idea then I recommend:
"Complete using backticks:"
- ( ) Always
- ( ) Only when required | 1.0 | Change Complete with Backticks --> Always Complete With Backticks - Please rename the preference "Complete with Backticks" to be "Always Complete With Backticks".
Some people want to use backticks with every identifier. That's rare, but I guess HFWT.
Everybody else that is using autocomplete expects that backticks will be added automatically if required. e.g. if you have a table column named `varchar` for some reason then autocompleting `mytable.var...` should insert backticks when you accept the recommended text.
---
Alternatively there is a case to be made in UX design to always avoid checkboxes. If you subscribe to that idea then I recommend:
"Complete using backticks:"
- ( ) Always
- ( ) Only when required | non_test | change complete with backticks always complete with backticks please rename the preference complete with backticks to be always complete with backticks some people want to use backticks with every identifier that s rare but i guess hfwt everybody else that is using autocomplete expects that backticks will be added automatically if required e g if you have a table column named varchar for some reason then autocompleting mytable var should insert backticks when you accept the recommended text alternatively there is a case to be made in ux design to always avoid checkboxes if you subscribe to that idea then i recommend complete using backticks always only when required | 0 |
320,328 | 9,779,445,200 | IssuesEvent | 2019-06-07 14:31:46 | SatelliteQE/robottelo | https://api.github.com/repos/SatelliteQE/robottelo | closed | Boundary testing for parameters under settings menu | Low Priority RFT UI | TASK- Boundary testing for Settings submenus (not provisioning-specific per se; could possibly be broken out into its own section).
[P1] Provisioning ( Partially done)
[P2] Bootdisk
[P2] Discovered
[P2] Puppet
Assure proper positive and negative testing of each field
Referencing this task with #986 as a few sub-tabs are already automated there. | 1.0 | Boundary testing for parameters under settings menu - TASK- Boundary testing for Settings submenus (not provisioning-specific per se; could possibly be broken out into its own section).
[P1] Provisioning ( Partially done)
[P2] Bootdisk
[P2] Discovered
[P2] Puppet
Assure proper positive and negative testing of each field
Referencing this task with #986 as a few sub-tabs are already automated there. | non_test | boundary testing for parameters under settings menu task boundary testing for settings submenus not provisioning specific per se could possibly be broken out into its own section provisioning partially done bootdisk discovered puppet assure proper positive and negative testing of each field referencing this task with as a few sub tabs are already automated there | 0 |
186,318 | 14,394,660,090 | IssuesEvent | 2020-12-03 01:49:23 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | jinzhongwei/etcd: integration/v3_watch_test.go; 86 LoC | fresh medium test |
Found a possible issue in [jinzhongwei/etcd](https://www.github.com/jinzhongwei/etcd) at [integration/v3_watch_test.go](https://github.com/jinzhongwei/etcd/blob/48567b8b38e6bfb11b6d74df832fd99b1b182ee3/integration/v3_watch_test.go#L206-L291)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/jinzhongwei/etcd/blob/48567b8b38e6bfb11b6d74df832fd99b1b182ee3/integration/v3_watch_test.go#L206-L291)
<details>
<summary>Click here to show the 86 line(s) of Go which triggered the analyzer.</summary>
```go
for i, tt := range tests {
clus := NewClusterV3(t, &ClusterConfig{Size: 3})
wAPI := toGRPC(clus.RandClient()).Watch
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
wStream, err := wAPI.Watch(ctx)
if err != nil {
t.Fatalf("#%d: wAPI.Watch error: %v", i, err)
}
err = wStream.Send(tt.watchRequest)
if err != nil {
t.Fatalf("#%d: wStream.Send error: %v", i, err)
}
// ensure watcher request created a new watcher
cresp, err := wStream.Recv()
if err != nil {
t.Errorf("#%d: wStream.Recv error: %v", i, err)
clus.Terminate(t)
continue
}
if !cresp.Created {
t.Errorf("#%d: did not create watchid, got %+v", i, cresp)
clus.Terminate(t)
continue
}
if cresp.Canceled {
t.Errorf("#%d: canceled watcher on create %+v", i, cresp)
clus.Terminate(t)
continue
}
createdWatchId := cresp.WatchId
if cresp.Header == nil || cresp.Header.Revision != 1 {
t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp)
clus.Terminate(t)
continue
}
// asynchronously create keys
go func() {
for _, k := range tt.putKeys {
kvc := toGRPC(clus.RandClient()).KV
req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")}
if _, err := kvc.Put(context.TODO(), req); err != nil {
t.Fatalf("#%d: couldn't put key (%v)", i, err)
}
}
}()
// check stream results
for j, wresp := range tt.wresps {
resp, err := wStream.Recv()
if err != nil {
t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err)
}
if resp.Header == nil {
t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j)
}
if resp.Header.Revision != wresp.Header.Revision {
t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision)
}
if wresp.Created != resp.Created {
t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created)
}
if resp.WatchId != createdWatchId {
t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId)
}
if !reflect.DeepEqual(resp.Events, wresp.Events) {
t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events)
}
}
rok, nr := waitResponse(wStream, 1*time.Second)
if !rok {
t.Errorf("unexpected pb.WatchResponse is received %+v", nr)
}
// can't defer because tcp ports will be in use
clus.Terminate(t)
}
```
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable tt used in defer or goroutine at line 249
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 48567b8b38e6bfb11b6d74df832fd99b1b182ee3
| 1.0 | jinzhongwei/etcd: integration/v3_watch_test.go; 86 LoC -
Found a possible issue in [jinzhongwei/etcd](https://www.github.com/jinzhongwei/etcd) at [integration/v3_watch_test.go](https://github.com/jinzhongwei/etcd/blob/48567b8b38e6bfb11b6d74df832fd99b1b182ee3/integration/v3_watch_test.go#L206-L291)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/jinzhongwei/etcd/blob/48567b8b38e6bfb11b6d74df832fd99b1b182ee3/integration/v3_watch_test.go#L206-L291)
<details>
<summary>Click here to show the 86 line(s) of Go which triggered the analyzer.</summary>
```go
for i, tt := range tests {
clus := NewClusterV3(t, &ClusterConfig{Size: 3})
wAPI := toGRPC(clus.RandClient()).Watch
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
wStream, err := wAPI.Watch(ctx)
if err != nil {
t.Fatalf("#%d: wAPI.Watch error: %v", i, err)
}
err = wStream.Send(tt.watchRequest)
if err != nil {
t.Fatalf("#%d: wStream.Send error: %v", i, err)
}
// ensure watcher request created a new watcher
cresp, err := wStream.Recv()
if err != nil {
t.Errorf("#%d: wStream.Recv error: %v", i, err)
clus.Terminate(t)
continue
}
if !cresp.Created {
t.Errorf("#%d: did not create watchid, got %+v", i, cresp)
clus.Terminate(t)
continue
}
if cresp.Canceled {
t.Errorf("#%d: canceled watcher on create %+v", i, cresp)
clus.Terminate(t)
continue
}
createdWatchId := cresp.WatchId
if cresp.Header == nil || cresp.Header.Revision != 1 {
t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp)
clus.Terminate(t)
continue
}
// asynchronously create keys
go func() {
for _, k := range tt.putKeys {
kvc := toGRPC(clus.RandClient()).KV
req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")}
if _, err := kvc.Put(context.TODO(), req); err != nil {
t.Fatalf("#%d: couldn't put key (%v)", i, err)
}
}
}()
// check stream results
for j, wresp := range tt.wresps {
resp, err := wStream.Recv()
if err != nil {
t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err)
}
if resp.Header == nil {
t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j)
}
if resp.Header.Revision != wresp.Header.Revision {
t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision)
}
if wresp.Created != resp.Created {
t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created)
}
if resp.WatchId != createdWatchId {
t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId)
}
if !reflect.DeepEqual(resp.Events, wresp.Events) {
t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events)
}
}
rok, nr := waitResponse(wStream, 1*time.Second)
if !rok {
t.Errorf("unexpected pb.WatchResponse is received %+v", nr)
}
// can't defer because tcp ports will be in use
clus.Terminate(t)
}
```
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable tt used in defer or goroutine at line 249
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 48567b8b38e6bfb11b6d74df832fd99b1b182ee3
| test | jinzhongwei etcd integration watch test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i tt range tests clus t clusterconfig size wapi togrpc clus randclient watch ctx cancel context withtimeout context background time second defer cancel wstream err wapi watch ctx if err nil t fatalf d wapi watch error v i err err wstream send tt watchrequest if err nil t fatalf d wstream send error v i err ensure watcher request created a new watcher cresp err wstream recv if err nil t errorf d wstream recv error v i err clus terminate t continue if cresp created t errorf d did not create watchid got v i cresp clus terminate t continue if cresp canceled t errorf d canceled watcher on create v i cresp clus terminate t continue createdwatchid cresp watchid if cresp header nil cresp header revision t errorf d header revision got v wanted revison i cresp clus terminate t continue asynchronously create keys go func for k range tt putkeys kvc togrpc clus randclient kv req pb putrequest key byte k value byte bar if err kvc put context todo req err nil t fatalf d couldn t put key v i err check stream results for j wresp range tt wresps resp err wstream recv if err nil t errorf d d wstream recv error v i j err if resp header nil t fatalf d d unexpected nil resp header i j if resp header revision wresp header revision t errorf d d resp header revision got d want d i j resp header revision wresp header revision if wresp created resp created t errorf d d resp created got v want v i j resp created wresp created if resp watchid createdwatchid t errorf d d resp watchid got d want d i j resp watchid createdwatchid if reflect deepequal resp events wresp events t errorf d d resp events got v want v i j resp events wresp events rok nr waitresponse wstream time second if rok t errorf unexpected pb watchresponse is received v nr can t defer because tcp ports will be in use clus terminate t below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable tt used in defer or goroutine at line leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
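The analyzer finding in the record above ("range-loop variable tt used in defer or goroutine") refers to a classic Go pitfall: before Go 1.22, a `for ... range` loop reuses a single loop variable across iterations, so goroutines or deferred calls that close over it may all observe a later (often the last) element. The sketch below is illustrative only — it is not code from the etcd repository above — and assumes a pre-1.22 toolchain:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	names := []string{"a", "b", "c"}

	// Buggy pattern (pre-Go 1.22): every goroutine closes over the same
	// loop variable, so all of them may print the final element "c".
	for _, n := range names {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("captured:", n)
		}()
	}
	wg.Wait()

	// Conventional fix: shadow the loop variable so each goroutine
	// captures its own per-iteration copy.
	for _, n := range names {
		n := n // per-iteration copy
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("copied:", n)
		}()
	}
	wg.Wait()
}
```

Passing the value as an argument (`go func(n string) { ... }(n)`) works equally well; Go 1.22 made loop variables per-iteration, which removes this hazard for code built with newer toolchains.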
153,300 | 12,139,810,196 | IssuesEvent | 2020-04-23 19:28:19 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: acceptance/bank/node-restart failed | C-test-failure O-roachtest O-robot branch-master release-blocker | [(roachtest).acceptance/bank/node-restart failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1860006&tab=buildLog) on [master@2d8263b1d5e5fb7c9697af1b17d3935bb88ad3d0](https://github.com/cockroachdb/cockroach/commits/2d8263b1d5e5fb7c9697af1b17d3935bb88ad3d0):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: artifacts/acceptance/bank/node-restart/run_1
bank.go:375,bank.go:490,acceptance.go:84,test_runner.go:753: pq: query execution canceled due to statement timeout
after 31.0s
main.(*bankClient).transferMoney
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/bank.go:74
main.(*bankState).transferMoney
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/bank.go:158
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
```
<details><summary>More</summary><p>
Artifacts: [/acceptance/bank/node-restart](https://teamcity.cockroachdb.com/viewLog.html?buildId=1860006&tab=artifacts#/acceptance/bank/node-restart)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fbank%2Fnode-restart.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: acceptance/bank/node-restart failed - [(roachtest).acceptance/bank/node-restart failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1860006&tab=buildLog) on [master@2d8263b1d5e5fb7c9697af1b17d3935bb88ad3d0](https://github.com/cockroachdb/cockroach/commits/2d8263b1d5e5fb7c9697af1b17d3935bb88ad3d0):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: artifacts/acceptance/bank/node-restart/run_1
bank.go:375,bank.go:490,acceptance.go:84,test_runner.go:753: pq: query execution canceled due to statement timeout
after 31.0s
main.(*bankClient).transferMoney
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/bank.go:74
main.(*bankState).transferMoney
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/bank.go:158
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
```
<details><summary>More</summary><p>
Artifacts: [/acceptance/bank/node-restart](https://teamcity.cockroachdb.com/viewLog.html?buildId=1860006&tab=artifacts#/acceptance/bank/node-restart)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fbank%2Fnode-restart.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest acceptance bank node restart failed on the test failed on branch master cloud gce test artifacts and logs in artifacts acceptance bank node restart run bank go bank go acceptance go test runner go pq query execution canceled due to statement timeout after main bankclient transfermoney go src github com cockroachdb cockroach pkg cmd roachtest bank go main bankstate transfermoney go src github com cockroachdb cockroach pkg cmd roachtest bank go runtime goexit usr local go src runtime asm s more artifacts powered by | 1 |
37,326 | 5,112,442,223 | IssuesEvent | 2017-01-06 11:12:43 | hpi-swt2/wimi-portal | https://api.github.com/repos/hpi-swt2/wimi-portal | closed | Test for handing in / rejecting / accepting time sheet twice | test-needed | What happens if the reject / accept / hand_in POST is sent multiple times? | 1.0 | Test for handing in / rejecting / accepting time sheet twice - What happens if the reject / accept / hand_in POST is sent multiple times? | test | test for handing in rejecting accepting time sheet twice what happens if the reject accept hand in post is sent multiple times | 1 |
123,788 | 10,289,023,059 | IssuesEvent | 2019-08-27 12:18:39 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Homepage amphtml not getting generated when there are no posts added. | NEED FAST REVIEW Need Testing [Priority: HIGH] bug | If we do not create any posts and set up a custom front page, then no amphtml is created on the homepage.
Ref: https://secure.helpscout.net/conversation/911411024/75266?folderId=2322649 | 1.0 | Homepage amphtml not getting generated when there are no posts added. - If we do not create any posts and set up a custom front page, then no amphtml is created on the homepage.
Ref: https://secure.helpscout.net/conversation/911411024/75266?folderId=2322649 | test | homepage amphtml not getting generated when there are no posts added if we do not create any posts and set up a custom front page then no amphtml is created on the homepage ref | 1 |
803,709 | 29,187,124,280 | IssuesEvent | 2023-05-19 16:22:14 | stratosphererl/stratosphere | https://api.github.com/repos/stratosphererl/stratosphere | closed | Make reference to number of users and replays dynamic in home.tsx and about.tsx | priority: high | # Acceptance Criteria
- [ ] When a user views either the home or about page, have the references in the text referring to the number of users and replays on our site display the actual number present on our site
# Estimation of Work
- TBA
# Tasks
For both home.tsx and about.tsx:
- [ ] Call stats service for number of users and number of replays
- [ ] Dynamically present the two numbers in the text of each page
# Risks
None
# Notes
n/a | 1.0 | Make reference to number of users and replays dynamic in home.tsx and about.tsx - # Acceptance Criteria
- [ ] When a user views either the home or about page, have the references in the text referring to the number of users and replays on our site display the actual number present on our site
# Estimation of Work
- TBA
# Tasks
For both home.tsx and about.tsx:
- [ ] Call stats service for number of users and number of replays
- [ ] Dynamically present the two numbers in the text of each page
# Risks
None
# Notes
n/a | non_test | make reference to number of users and replays dynamic in home tsx and about tsx acceptance criteria when a user views either the home or about page have the references in the text referring to the number of users and replays on our site display the actual number present on our site estimation of work tba tasks for both home tsx and about tsx call stats service for number of users and number of replays dynamically present the two numbers in the text of each page risks none notes n a | 0 |
306,159 | 26,441,099,663 | IssuesEvent | 2023-01-16 00:09:50 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | BUG: ValueError converting dense categorical series to sparse when `fill_value` not in series | Bug Sparse Categorical Needs Tests | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
from pandas import SparseDtype
df = pd.DataFrame([["a", 0],["b", 1], ["b", 2]], columns=["A","B"])
df["A"].astype(SparseDtype("category"))
# or: df["A"].astype(SparseDtype("category", fill_value="not_in_series"))
```
### Issue Description
I am unable to convert a dense categorical series to a sparse one when I leave the `fill_value` at its default, or set it to a value that does not exist in the series.
Stacktrace:
<details>
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/core/formatters.py:706, in PlainTextFormatter.__call__(self, obj)
699 stream = StringIO()
700 printer = pretty.RepresentationPrinter(stream, self.verbose,
701 self.max_width, self.newline,
702 max_seq_length=self.max_seq_length,
703 singleton_pprinters=self.singleton_printers,
704 type_pprinters=self.type_printers,
705 deferred_pprinters=self.deferred_printers)
--> 706 printer.pretty(obj)
707 printer.flush()
708 return stream.getvalue()
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/lib/pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/lib/pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/core/series.py:1550, in Series.__repr__(self)
1548 # pylint: disable=invalid-repr-returned
1549 repr_params = fmt.get_series_repr_params()
-> 1550 return self.to_string(**repr_params)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/core/series.py:1643, in Series.to_string(self, buf, na_rep, float_format, header, index, length, dtype, name, max_rows, min_rows)
1597 """
1598 Render a string representation of the Series.
1599
(...)
1629 String representation of Series if ``buf=None``, otherwise None.
1630 """
1631 formatter = fmt.SeriesFormatter(
1632 self,
1633 name=name,
(...)
1641 max_rows=max_rows,
1642 )
-> 1643 result = formatter.to_string()
1645 # catch contract violations
1646 if not isinstance(result, str):
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:393, in SeriesFormatter.to_string(self)
390 return f"{type(self.series).__name__}([], {footer})"
392 fmt_index, have_header = self._get_formatted_index()
--> 393 fmt_values = self._get_formatted_values()
395 if self.is_truncated_vertically:
396 n_header_rows = 0
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:377, in SeriesFormatter._get_formatted_values(self)
376 def _get_formatted_values(self) -> list[str]:
--> 377 return format_array(
378 self.tr_series._values,
379 None,
380 float_format=self.float_format,
381 na_rep=self.na_rep,
382 leading_space=self.index,
383 )
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1326, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting)
1311 digits = get_option("display.precision")
1313 fmt_obj = fmt_klass(
1314 values,
1315 digits=digits,
(...)
1323 quoting=quoting,
1324 )
-> 1326 return fmt_obj.get_result()
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1357, in GenericArrayFormatter.get_result(self)
1356 def get_result(self) -> list[str]:
-> 1357 fmt_values = self._format_strings()
1358 return _make_fixed_width(fmt_values, self.justify)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1658, in ExtensionArrayFormatter._format_strings(self)
1656 array = values._internal_get_values()
1657 else:
-> 1658 array = np.asarray(values)
1660 fmt_values = format_array(
1661 array,
1662 formatter,
(...)
1670 quoting=self.quoting,
1671 )
1672 return fmt_values
ValueError: object __array__ method not producing an array
</details>
### Expected Behavior
I expect it to "just work", similar to providing a fill value which does exist in the series, or how it works with other dtypes:
```python
import pandas as pd
from pandas import SparseDtype
df = pd.DataFrame([["a", 0],["b", 1], ["b", 2]], columns=["A","B"])
# works, since "a" is a value present in the series
df["A"].astype(SparseDtype("category", fill_value="a"))
# also works, despite -1 not being present in the series
df["B"].astype(SparseDtype(int, fill_value=-1))
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 8dab54d6573f7186ff0c3b6364d5e4dd635ff3e7
python : 3.10.5.final.0
python-bits : 64
OS : Darwin
OS-release : 21.5.0
Version : Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 1.5.2
numpy : 1.23.5
pytz : 2022.6
dateutil : 2.8.2
setuptools : 58.1.0
pip : 22.3.1
Cython : None
pytest : 7.2.0
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.6.0
pandas_datareader: None
bs4 : 4.11.1
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.6.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 10.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.3
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None
</details>
| 1.0 | BUG: ValueError converting dense categorical series to sparse when `fill_value` not in series - ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
from pandas import SparseDtype
df = pd.DataFrame([["a", 0],["b", 1], ["b", 2]], columns=["A","B"])
df["A"].astype(SparseDtype("category"))
# or: df["A"].astype(SparseDtype("category", fill_value="not_in_series"))
```
### Issue Description
I am unable to convert a dense categorical series to a sparse one when I leave the `fill_value` at its default, or set it to a value that does not exist in the series.
Stacktrace:
<details>
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/core/formatters.py:706, in PlainTextFormatter.__call__(self, obj)
699 stream = StringIO()
700 printer = pretty.RepresentationPrinter(stream, self.verbose,
701 self.max_width, self.newline,
702 max_seq_length=self.max_seq_length,
703 singleton_pprinters=self.singleton_printers,
704 type_pprinters=self.type_printers,
705 deferred_pprinters=self.deferred_printers)
--> 706 printer.pretty(obj)
707 printer.flush()
708 return stream.getvalue()
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/lib/pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/IPython/lib/pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/core/series.py:1550, in Series.__repr__(self)
1548 # pylint: disable=invalid-repr-returned
1549 repr_params = fmt.get_series_repr_params()
-> 1550 return self.to_string(**repr_params)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/core/series.py:1643, in Series.to_string(self, buf, na_rep, float_format, header, index, length, dtype, name, max_rows, min_rows)
1597 """
1598 Render a string representation of the Series.
1599
(...)
1629 String representation of Series if ``buf=None``, otherwise None.
1630 """
1631 formatter = fmt.SeriesFormatter(
1632 self,
1633 name=name,
(...)
1641 max_rows=max_rows,
1642 )
-> 1643 result = formatter.to_string()
1645 # catch contract violations
1646 if not isinstance(result, str):
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:393, in SeriesFormatter.to_string(self)
390 return f"{type(self.series).__name__}([], {footer})"
392 fmt_index, have_header = self._get_formatted_index()
--> 393 fmt_values = self._get_formatted_values()
395 if self.is_truncated_vertically:
396 n_header_rows = 0
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:377, in SeriesFormatter._get_formatted_values(self)
376 def _get_formatted_values(self) -> list[str]:
--> 377 return format_array(
378 self.tr_series._values,
379 None,
380 float_format=self.float_format,
381 na_rep=self.na_rep,
382 leading_space=self.index,
383 )
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1326, in format_array(values, formatter, float_format, na_rep, digits, space, justify, decimal, leading_space, quoting)
1311 digits = get_option("display.precision")
1313 fmt_obj = fmt_klass(
1314 values,
1315 digits=digits,
(...)
1323 quoting=quoting,
1324 )
-> 1326 return fmt_obj.get_result()
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1357, in GenericArrayFormatter.get_result(self)
1356 def get_result(self) -> list[str]:
-> 1357 fmt_values = self._format_strings()
1358 return _make_fixed_width(fmt_values, self.justify)
File ~/repositories/arff-to-parquet/venv/lib/python3.10/site-packages/pandas/io/formats/format.py:1658, in ExtensionArrayFormatter._format_strings(self)
1656 array = values._internal_get_values()
1657 else:
-> 1658 array = np.asarray(values)
1660 fmt_values = format_array(
1661 array,
1662 formatter,
(...)
1670 quoting=self.quoting,
1671 )
1672 return fmt_values
ValueError: object __array__ method not producing an array
</details>
### Expected Behavior
I expect it to "just work", similar to providing a fill value which does exist in the series, or how it works with other dtypes:
```python
import pandas as pd
from pandas import SparseDtype
df = pd.DataFrame([["a", 0],["b", 1], ["b", 2]], columns=["A","B"])
# works, since "a" is a value present in the series
df["A"].astype(SparseDtype("category", fill_value="a"))
# also works, despite -1 not being present in the series
df["B"].astype(SparseDtype(int, fill_value=-1))
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 8dab54d6573f7186ff0c3b6364d5e4dd635ff3e7
python : 3.10.5.final.0
python-bits : 64
OS : Darwin
OS-release : 21.5.0
Version : Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 1.5.2
numpy : 1.23.5
pytz : 2022.6
dateutil : 2.8.2
setuptools : 58.1.0
pip : 22.3.1
Cython : None
pytest : 7.2.0
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.6.0
pandas_datareader: None
bs4 : 4.11.1
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.6.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 10.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.3
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None
</details>
| test | bug valueerror converting dense categorical series to sparse when fill value not in series pandas version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pandas i have confirmed this bug exists on the main branch of pandas reproducible example python import pandas as pd from pandas import sparsedtype df pd dataframe columns df astype sparsedtype category or df astype sparsedtype category fill value not in series issue description i am unable to convert a dense categorical series to a sparse one when i leave the fill value at default or a value which does not exist in the series stacktrace valueerror traceback most recent call last file repositories arff to parquet venv lib site packages ipython core formatters py in plaintextformatter call self obj stream stringio printer pretty representationprinter stream self verbose self max width self newline max seq length self max seq length singleton pprinters self singleton printers type pprinters self type printers deferred pprinters self deferred printers printer pretty obj printer flush return stream getvalue file repositories arff to parquet venv lib site packages ipython lib pretty py in representationprinter pretty self obj return meth obj self cycle if cls is not object and callable cls dict get repr return repr pprint obj self cycle return default pprint obj self cycle finally file repositories arff to parquet venv lib site packages ipython lib pretty py in repr pprint obj p cycle a pprint that just redirects to the normal repr function find newlines and replace them with p break output repr obj lines output splitlines with p group file repositories arff to parquet venv lib site packages pandas core series py in series repr self pylint disable invalid repr returned repr params fmt get series repr params return self to string repr params file repositories arff to parquet venv lib site packages pandas core series py in series to string self buf na rep float format header index length dtype name max rows min rows render a string representation of the series string representation of series if buf none otherwise none formatter fmt seriesformatter self name name max rows max rows result formatter to string catch contract violations if not isinstance result str file repositories arff to parquet venv lib site packages pandas io formats format py in seriesformatter to string self return f type self series name footer fmt index have header self get formatted index fmt values self get formatted values if self is truncated vertically n header rows file repositories arff to parquet venv lib site packages pandas io formats format py in seriesformatter get formatted values self def get formatted values self list return format array self tr series values none float format self float format na rep self na rep leading space self index file repositories arff to parquet venv lib site packages pandas io formats format py in format array values formatter float format na rep digits space justify decimal leading space quoting digits get option display precision fmt obj fmt klass values digits digits quoting quoting return fmt obj get result file repositories arff to parquet venv lib site packages pandas io formats format py in genericarrayformatter get result self def get result self list fmt values self format strings return make fixed width fmt values self justify file repositories arff to parquet venv lib site packages pandas io formats format py in extensionarrayformatter format strings self array values internal get values else array np asarray values fmt values format array array formatter quoting self quoting return fmt values valueerror object array method not producing an array expected behavior i expect it to just work similar to providing a fill value which does exist in the series or how it works with other dtypes python import pandas as pd from pandas import sparsedtype df pd dataframe columns works since a is a value present in the series df astype sparsedtype category fill value a also works despite not being present in the series df astype sparsedtype int fill value installed versions installed versions commit python final python bits os darwin os release version darwin kernel version tue apr pdt root xnu release machine processor arm byteorder little lc all none lang none locale none utf pandas numpy pytz dateutil setuptools pip cython none pytest hypothesis none sphinx none blosc none feather none xlsxwriter none lxml etree none none pymysql none none ipython pandas datareader none bottleneck none brotli none fastparquet none fsspec none gcsfs none matplotlib numba none numexpr none odfpy none openpyxl none pandas gbq none pyarrow pyreadstat none pyxlsb none none scipy snappy none sqlalchemy none tables none tabulate none xarray none xlrd none xlwt none zstandard none tzdata none | 1 |
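Until the crash documented in the record above is fixed upstream, conversions can be kept on the code path the reporter confirmed works — a `fill_value` that actually occurs in the series. The helper below is a hedged sketch: `to_sparse_category` is a hypothetical wrapper, not pandas API, and falling back to the most frequent value is an arbitrary assumption made for illustration:

```python
import pandas as pd
from pandas import SparseDtype


def to_sparse_category(s: pd.Series, fill_value):
    # Hypothetical guard: the report shows that a fill_value absent from
    # the series crashes repr in pandas 1.5.2, while a present value works.
    if fill_value not in set(s.dropna()):
        fill_value = s.mode().iloc[0]  # fall back to the most frequent value
    return s.astype(SparseDtype("category", fill_value=fill_value))


df = pd.DataFrame([["a", 0], ["b", 1], ["b", 2]], columns=["A", "B"])
sparse_a = to_sparse_category(df["A"], "not_in_series")  # falls back to "b"
print(sparse_a.dtype)
```

This only sidesteps the formatting crash by avoiding absent fill values; the underlying `astype` behavior still needs the upstream fix.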
794,935 | 28,055,385,506 | IssuesEvent | 2023-03-29 09:02:31 | NorskRegnesentral/ccc21 | https://api.github.com/repos/NorskRegnesentral/ccc21 | closed | 3. Write function to simulate array with colors | color_deficiency high_priority JSL | Write a function that takes simulation_type and array with original colors as input. Compute simulated colors. Return simulation_type and array with simulation colors.

| 1.0 | 3. Write function to simulate array with colors - Write a function that takes simulation_type and array with original colors as input. Compute simulated colors. Return simulation_type and array with simulation colors.

| non_test | write function to simulate array with colors write a function that takes simulation type and array with original colors as input compute simulated colors return simulation type and array with simulation colors | 0 |
208,084 | 15,874,081,884 | IssuesEvent | 2021-04-09 04:06:05 | wesnoth/wesnoth | https://api.github.com/repos/wesnoth/wesnoth | closed | CI: WML unit tests results make it very hard to find errors | Bug Unit Tests | @CelticMinstrel the WML unit test script's results are currently very confusing. Below is part of the output of the c611128 validation (which passes, none of these are problems). However, this makes it very hard to work out why the CI failed on other runs - there's one other error, somewhere.
```
Running test test_move_fail_6
Error (strict mode, strict_level = 1): wesnoth reported on channel warning replay
20210314 23:50:58 warning replay: Warning: Path data contained something which could not be parsed to a sequence of locations:
config = x = 16,15,14,13,12,11
y = 3,3,3,3,3,bock
FAIL TEST (BROKE STRICT): test_move_fail_6
Running test test_store_unit_defense_deprecated
Error (strict mode, strict_level = 1): wesnoth reported on channel error deprecation
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
FAIL TEST (BROKE STRICT): test_store_unit_defense_deprecated
Running test alice_kills_bob
PASS TEST (VICTORY): alice_kills_bob
Running test bob_kills_alice_on_retal
PASS TEST (VICTORY): bob_kills_alice_on_retal
Running test alice_kills_bob_levelup
PASS TEST (VICTORY): alice_kills_bob_levelup
Running test bob_kills_alice
PASS TEST (VICTORY): bob_kills_alice
Running test alice_kills_bob_on_retal
PASS TEST (VICTORY): alice_kills_bob_on_retal
Running test alice_kills_bob_on_retal_levelup
PASS TEST (VICTORY): alice_kills_bob_on_retal_levelup
Running test test_wml_menu_items_2
Error (strict mode, strict_level = 1): wesnoth reported on channel warning wml
20210314 23:51:08 warning wml: The following conditional test unexpectedly failed:
[variable]
boolean_equals=yes
name="result"
[/variable]
Interpolated to:
[variable]
boolean_equals=yes
name="result"
[/variable]
Note: The variable result currently has the value false.
FAIL TEST: test_wml_menu_items_2
Running test filter_formula_unit_error
Error (strict mode, strict_level = 1): wesnoth reported on channel error scripting/lua
20210314 23:51:09 error scripting/lua: Formula error in formula:1
In formula +
Error: Illegal unary operator: '+'
stack traceback:
[C]: in ?
[C]: in field 'eval_conditional'
lua/wml-flow.lua:17: in local 'cmd'
lua/wml-utils.lua:144: in field 'handle_event_commands'
lua/wml-flow.lua:19: in local 'cmd'
lua/wml-utils.lua:144: in field 'handle_event_commands'
lua/wml-flow.lua:5: in function <lua/wml-flow.lua:4>
FAIL TEST (BROKE STRICT): filter_formula_unit_error
``` | 1.0 | CI: WML unit tests results make it very hard to find errors - @CelticMinstrel the WML unit test script's results are currently very confusing. Below is part of the output of the c611128 validation (which passes, none of these are problems). However, this makes it very hard to work out why the CI failed on other runs - there's one other error, somewhere.
```
Running test test_move_fail_6
Error (strict mode, strict_level = 1): wesnoth reported on channel warning replay
20210314 23:50:58 warning replay: Warning: Path data contained something which could not be parsed to a sequence of locations:
config = x = 16,15,14,13,12,11
y = 3,3,3,3,3,bock
FAIL TEST (BROKE STRICT): test_move_fail_6
Running test test_store_unit_defense_deprecated
Error (strict mode, strict_level = 1): wesnoth reported on channel error deprecation
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
20210314 23:50:59 error deprecation: [store_unit_defense] has been deprecated and will be removed in version 1.17.0.
This function returns the chance to be hit, high values represent bad defenses. Using [store_unit_defense_on] is recommended instead.
FAIL TEST (BROKE STRICT): test_store_unit_defense_deprecated
Running test alice_kills_bob
PASS TEST (VICTORY): alice_kills_bob
Running test bob_kills_alice_on_retal
PASS TEST (VICTORY): bob_kills_alice_on_retal
Running test alice_kills_bob_levelup
PASS TEST (VICTORY): alice_kills_bob_levelup
Running test bob_kills_alice
PASS TEST (VICTORY): bob_kills_alice
Running test alice_kills_bob_on_retal
PASS TEST (VICTORY): alice_kills_bob_on_retal
Running test alice_kills_bob_on_retal_levelup
PASS TEST (VICTORY): alice_kills_bob_on_retal_levelup
Running test test_wml_menu_items_2
Error (strict mode, strict_level = 1): wesnoth reported on channel warning wml
20210314 23:51:08 warning wml: The following conditional test unexpectedly failed:
[variable]
boolean_equals=yes
name="result"
[/variable]
Interpolated to:
[variable]
boolean_equals=yes
name="result"
[/variable]
Note: The variable result currently has the value false.
FAIL TEST: test_wml_menu_items_2
Running test filter_formula_unit_error
Error (strict mode, strict_level = 1): wesnoth reported on channel error scripting/lua
20210314 23:51:09 error scripting/lua: Formula error in formula:1
In formula +
Error: Illegal unary operator: '+'
stack traceback:
[C]: in ?
[C]: in field 'eval_conditional'
lua/wml-flow.lua:17: in local 'cmd'
lua/wml-utils.lua:144: in field 'handle_event_commands'
lua/wml-flow.lua:19: in local 'cmd'
lua/wml-utils.lua:144: in field 'handle_event_commands'
lua/wml-flow.lua:5: in function <lua/wml-flow.lua:4>
FAIL TEST (BROKE STRICT): filter_formula_unit_error
``` | test | ci wml unit tests results make it very hard to find errors celticminstrel the wml unit test script s results are currently very confusing below is part of the output of the validation which passes none of these are problems however this makes it very hard to work out why the ci failed on other runs there s one other error somewhere running test test move fail error strict mode strict level wesnoth reported on channel warning replay warning replay warning path data contained something which could not be parsed to a sequence of locations config x y bock fail test broke strict test move fail running test test store unit defense deprecated error strict mode strict level wesnoth reported on channel error deprecation error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead error deprecation has been deprecated and will be removed in version this function returns the chance to be hit high values represent bad defenses using is recommended instead fail test broke strict test store unit defense deprecated running test alice kills bob pass test victory alice kills bob running test bob kills alice on retal pass test victory bob kills alice on retal running test alice kills bob levelup pass test victory alice kills bob levelup running test bob kills alice pass test victory bob kills alice running test alice kills bob on retal pass test victory alice kills bob on retal running test alice kills bob on retal levelup pass test victory alice kills bob on retal levelup running test test wml menu items error strict mode strict level wesnoth reported on channel warning wml warning wml the following conditional test unexpectedly failed boolean equals yes name result interpolated to boolean equals yes name result note the variable result currently has the value false fail test test wml menu items running test filter formula unit error error strict mode strict level wesnoth reported on channel error scripting lua error scripting lua formula error in formula in formula error illegal unary operator stack traceback in in field eval conditional lua wml flow lua in local cmd lua wml utils lua in field handle event commands lua wml flow lua in local cmd lua wml utils lua in field handle event commands lua wml flow lua in function fail test broke strict filter formula unit error | 1 |
599,058 | 18,264,855,086 | IssuesEvent | 2021-10-04 07:10:06 | harvester/harvester | https://api.github.com/repos/harvester/harvester | closed | [BUG] VMs are crashing when writing bulk data to additional volumes | bug area/ui priority/1 | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Running `dd` to write data to an additional volume causes the VM to crash.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a VM with an additional volume attached. Configure VM Memory limit to 10GB.
2. Run `dd if=/dev/zero of=/dev/sda bs=1M conv=sync` on the volume and wait for 10-20 seconds
3. The VM crashes and is automatically restarted
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
A VM should not crash when writing data to a secondary disc.
**Environment:**
- Harvester ISO version: 0.3.0-preview
**Additional context**
It appears the qemu pod is OOM killed because it exceeds the memory limit of 10GB that Harvester has configured.
```
Sep 16 08:38:09 lpedge01003 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=06ea4e4c81e3b77d6c6d4d150016783adb9476391a3d713e4ccf0180889a4557,mems_allowed=0-1,oom_memcg=/kubepods/pod4f6371a7-c7ea-416a-837a-a4692d874756,task_memcg=/kubepods/pod4f6371a7-c7ea-416a-837a-a4692d874756/06ea4e4c81e3b77d6c6d4d150016783adb9476391a3d713e4ccf0180889a4557,task=qemu-system-x86,pid=1526,uid=107
Sep 16 08:38:09 lpedge01003 kernel: Memory cgroup out of memory: Killed process 1526 (qemu-system-x86) total-vm:11226804kB, anon-rss:10439204kB, file-rss:24704kB, shmem-rss:4kB
Sep 16 08:38:09 lpedge01003 kernel: oom_reaper: reaped process 1526 (qemu-system-x86), now anon-rss:0kB, file-rss:12kB, shmem-rss:4kB
```
| 1.0 | [BUG] VMs are crashing when writing bulk data to additional volumes - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Running `dd` to write data to an additional volume causes the VM to crash.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a VM with an additional volume attached. Configure VM Memory limit to 10GB.
2. Run `dd if=/dev/zero of=/dev/sda bs=1M conv=sync` on the volume and wait for 10-20 seconds
3. The VM crashes and is automatically restarted
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
A VM should not crash when writing data to a secondary disk.
**Environment:**
- Harvester ISO version: 0.3.0-preview
**Additional context**
It appears the qemu pod is OOM killed because it exceeds the memory limit of 10GB that Harvester has configured.
```
Sep 16 08:38:09 lpedge01003 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=06ea4e4c81e3b77d6c6d4d150016783adb9476391a3d713e4ccf0180889a4557,mems_allowed=0-1,oom_memcg=/kubepods/pod4f6371a7-c7ea-416a-837a-a4692d874756,task_memcg=/kubepods/pod4f6371a7-c7ea-416a-837a-a4692d874756/06ea4e4c81e3b77d6c6d4d150016783adb9476391a3d713e4ccf0180889a4557,task=qemu-system-x86,pid=1526,uid=107
Sep 16 08:38:09 lpedge01003 kernel: Memory cgroup out of memory: Killed process 1526 (qemu-system-x86) total-vm:11226804kB, anon-rss:10439204kB, file-rss:24704kB, shmem-rss:4kB
Sep 16 08:38:09 lpedge01003 kernel: oom_reaper: reaped process 1526 (qemu-system-x86), now anon-rss:0kB, file-rss:12kB, shmem-rss:4kB
```
| non_test | vms are crashing when writing bulk data to additional volumes describe the bug running dd to write data to an additional volume causes the vm to crash to reproduce steps to reproduce the behavior create a vm with an additional volume attached configure vm memory limit to run dd if dev zero of dev sda bs conv sync on the volume and wait for seconds the vm crashes and is automatically restarted expected behavior a vm should not crash when writing data to a secondary disc environment harvester iso version preview additional context it appears the qemu pod is oom killed because it exceeds the memory limit of that harvester has configured sep kernel oom kill constraint constraint memcg nodemask null cpuset mems allowed oom memcg kubepods task memcg kubepods task qemu system pid uid sep kernel memory cgroup out of memory killed process qemu system total vm anon rss file rss shmem rss sep kernel oom reaper reaped process qemu system now anon rss file rss shmem rss | 0 |
136,872 | 30,599,416,368 | IssuesEvent | 2023-07-22 06:59:28 | SCIInstitute/ShapeWorks | https://api.github.com/repos/SCIInstitute/ShapeWorks | closed | Parent Issue for Repo Cleanup | Priority: Medium Status: Code Cleanup | This is a parent issue for all issues that pertain to cleaning up the repo:
- [x] #160
- [x] #101
- [x] #171
- [ ] #408
- [ ] #173
- [ ] #1149 | 1.0 | Parent Issue for Repo Cleanup - This is a parent issue for all issues that pertain to cleaning up the repo:
- [x] #160
- [x] #101
- [x] #171
- [ ] #408
- [ ] #173
- [ ] #1149 | non_test | parent issue for repo cleanup this is a parent issue for all issues that pertain to cleaning up the repo | 0 |
329,125 | 10,012,497,822 | IssuesEvent | 2019-07-15 13:20:35 | weaveworks/ignite | https://api.github.com/repos/weaveworks/ignite | closed | missing /etc/resolv.conf | kind/bug priority/important-soon | So far, with both the centos and ubuntu weaveworks images for ignite, when I start a VM, network connectivity seems to be fine, except that I cannot look up any hostnames. Creating a trivial `/etc/resolv.conf` with `nameserver 8.8.8.8` solves this and makes it a lot easier to work inside the VMs.
Docker and Kubernetes are both known to create/manage this file based on the host's resolver settings and user options; it might be nice if ignite behaved similarly to Docker here.
Alternatively, if we don't want to do this and want to leave it out of the scope for `ignite`, it will probably be friendlier to new users if the default images ship a simple resolv.conf with at least one nameserver so package management, wget, etc. work. | 1.0 | missing /etc/resolv.conf - So far, with both the centos and ubuntu weaveworks images for ignite, when I start a VM, network connectivity seems to be fine, except that I cannot look up any hostnames. Creating a trivial `/etc/resolv.conf` with `nameserver 8.8.8.8` solves this and makes it a lot easier to work inside the VMs.
Docker and Kubernetes are both known to create/manage this file based on the host's resolver settings and user options; it might be nice if ignite behaved similarly to Docker here.
Alternatively, if we don't want to do this and want to leave it out of the scope for `ignite`, it will probably be friendlier to new users if the default images ship a simple resolv.conf with at least one nameserver so package management, wget, etc. work. | non_test | missing etc resolv conf so far with both the centos and ubuntu weaveworks images for ignite when i start a vm network connectivity seems to be fine except i cannot lookup any hostnames creating a trivial etc resolv conf with nameserver solves this and makes it a lot easier to work inside the vms docker and kubernetes are both known to create manage this file based on the hosts resolver settings and user options it might be nice if ignite behaved similar to docker here alternatively if we don t want to do this and want to leave it out of the scope for ignite it will probably be friendlier to new users if the default images ship a simple resolv conf with at least one nameserver so package management wget etc work | 0 |
346,317 | 30,884,496,601 | IssuesEvent | 2023-08-03 20:27:03 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Upgrading from Prisma 5.0.0 -> 5.1.0 results in "TS2321: Excessive stack depth comparing types" error using `mockDeep<PrismaClient>()` | bug/1-unconfirmed kind/bug topic: tests tech/typescript team/client 5.1.0 | ### Bug description
We use the InversifyJS dependency injection framework and TypeScript 5.1.6. When upgrading from Prisma 5.0.0 to 5.1.0 with no other code changes, `tsc` results in the following error:
```
... - error TS2321: Excessive stack depth comparing types 'DeepMockProxy<PrismaClient<PrismaClientOptions, never, DefaultArgs>>' and 'PrismaClient<PrismaClientOptions, unknown, Args_2>'.
31 tc.bind(PrismaClient).toConstantValue(mockDeep<PrismaClient>());
```
Not sure if there are workarounds, but this directly reflects the recommended practice from the Prisma guide on unit testing here: https://www.prisma.io/docs/guides/testing/unit-testing#dependency-injection
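Until this is resolved upstream, one workaround that is commonly suggested for "excessive stack depth" errors involving deep mock types is to widen the mock's type at the binding site, so the checker never has to relate the two recursive generics to each other. The sketch below is an untested illustration of that idea, not a confirmed fix; the `prismaMock` and `tc` names are just examples:

```ts
import { Container } from 'inversify';
import { PrismaClient } from '@prisma/client';
import { mockDeep, DeepMockProxy } from 'jest-mock-extended';

// Keep the deeply-typed handle around for configuring expectations...
const prismaMock: DeepMockProxy<PrismaClient> = mockDeep<PrismaClient>();

// ...but bind a value already typed as a plain PrismaClient, so tsc never
// compares DeepMockProxy<PrismaClient> to PrismaClient structurally.
const tc = new Container();
tc.bind(PrismaClient).toConstantValue(prismaMock as unknown as PrismaClient);
```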
### How to reproduce
1. Follow Prisma unit testing guide for dependency testing here: https://www.prisma.io/docs/guides/testing/unit-testing#dependency-injection
2. Use `mockDeep<PrismaClient>()` in a unit test
3. Upgrade to Prisma 5.1.0
4. Run TypeScript (i.e. v5.1.6) typechecker
### Expected behavior
TS compilation and typecheck behavior matches 5.0.0.
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
```prisma
// Add your schema.prisma
```
```ts
const tc = new Container();
tc.bind(PrismaClient).toConstantValue(mockDeep<PrismaClient>());
return tc;
```
### Environment & setup
- OS: macOS and Debian
- Database: PostgreSQL
- Node.js version: 18.6.0
### Prisma Version
```
5.1.0
```
| 1.0 | Upgrading from Prisma 5.0.0 -> 5.1.0 results in "TS2321: Excessive stack depth comparing types" error using `mockDeep<PrismaClient>()` - ### Bug description
We use the InversifyJS dependency injection framework and TypeScript 5.1.6. When upgrading from Prisma 5.0.0 to 5.1.0 with no other code changes, `tsc` results in the following error:
```
... - error TS2321: Excessive stack depth comparing types 'DeepMockProxy<PrismaClient<PrismaClientOptions, never, DefaultArgs>>' and 'PrismaClient<PrismaClientOptions, unknown, Args_2>'.
31 tc.bind(PrismaClient).toConstantValue(mockDeep<PrismaClient>());
```
Not sure if there are workarounds, but this directly reflects the recommended practice from the Prisma guide on unit testing here: https://www.prisma.io/docs/guides/testing/unit-testing#dependency-injection
### How to reproduce
1. Follow Prisma unit testing guide for dependency testing here: https://www.prisma.io/docs/guides/testing/unit-testing#dependency-injection
2. Use `mockDeep<PrismaClient>()` in a unit test
3. Upgrade to Prisma 5.1.0
4. Run TypeScript (i.e. v5.1.6) typechecker
### Expected behavior
TS compilation and typecheck behavior matches 5.0.0.
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
```prisma
// Add your schema.prisma
```
```ts
const tc = new Container();
tc.bind(PrismaClient).toConstantValue(mockDeep<PrismaClient>());
return tc;
```
### Environment & setup
- OS: macOS and Debian
- Database: PostgreSQL
- Node.js version: 18.6.0
### Prisma Version
```
5.1.0
```
| test | upgrading from prisma results in excessive stack depth comparing types error using mockdeep bug description we use the inversifyjs dependency injection framework and typescript when upgrading from prisma to with no other code changes tsc results in the following error error excessive stack depth comparing types deepmockproxy and prismaclient tc bind prismaclient toconstantvalue mockdeep not sure if there are workarounds but this directly reflects the recommended practice from the prisma guide on unit testing here how to reproduce follow prisma unit testing guide for dependency testing here use mockdeep in a unit test upgrade to prisma run typescript i e typechecker expected behavior ts compilation and typecheck behavior matches prisma information prisma add your schema prisma ts const tc new container tc bind prismaclient toconstantvalue mockdeep return tc environment setup os macos and debian database postgresql node js version prisma version | 1 |
351,129 | 10,512,967,822 | IssuesEvent | 2019-09-27 19:17:37 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Advanced Masonry table can fly | Medium Priority | Steps to reproduce:
- place Advanced Masonry table

- Move away. It flies

| 1.0 | Advanced Masonry table can fly - Steps to reproduce:
- place Advanced Masonry table

- Move away. It flies

| non_test | advanced masonry table can fly step to reproduce place advanced masonry table move away it flies | 0 |
11,656 | 32,007,866,529 | IssuesEvent | 2023-09-21 15:54:13 | Nexus-Mods/NexusMods.App | https://api.github.com/repos/Nexus-Mods/NexusMods.App | closed | Use an extensible solution for IDs instead of Enums for GameFolderType | area-game-support area-code-architecture-design | GameFolderType currently is an Enum defined in the main App that identifies a small list of common game paths that the main application and other components might need to reference in a game-installation-agnostic way.
The issue is that some games might have particular paths they need to expose that aren't present in other games.
If a new game needs to add one such identifiable path, it would need to edit the shared Enum.
The problem is that the enum would quickly get polluted with game specific path ids that might be interpreted differently for different games.
A different solution is needed, one that doesn't rely on a central shared list of IDs but instead allows each game to define its own list in addition to the common ones.
Many of these paths could likely be useful for other extension components (mod installers, diagnostics, etc), while the main application could realistically not need to be aware of them.
So it would be convenient for games to be able to define new PathIds without having to change the main app code, while still allowing other components to reference them.
The developer of the extension or component will need to know the static ID of the path it needs and check for its existence in the currently managed game.
This admittedly isn't very elegant, but would allow bypassing visibility of the ID definition.
It can make it harder for developers to find the correct ID that is needed to be implemented for a game.
For example, all Bethesda games have the `Data` folder; to add a new Bethesda game, the developer would need to look for what ID was used in the other games to define Data and make sure to reuse the same one. Any extension developer wanting to reference Data would need to look up the exact ID to use. This is somewhat simplified in the case of human-readable IDs like strings, while more error-prone in the case of GUIDs, for example.
## Proposal:
Use statically defined GUIDs wrapped in ValueObjects (for Nominal Typing) as GamePathTypes (or GamePathIds)
To define a new GUID statically on Rider, you can press Shift twice, type in GUID and there is an option to generate a GUID in various formats.
App defines a default list of common GamePathTypes (game, saves, config, appdata)
Game then exposes a collection of GamePaths with either the App defined IDs or custom IDs.
Upside is fixed size of IDs and very low chance of collision and ability for any component to check for a path if they know the ID, even if the game type that defined it isn't visible from the extension or component.
Downside is the non-human-readable nature of the ID.
You can name the value object instance in the code that references a particular path ID instead.
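For illustration, here is a minimal sketch of the proposed pattern. It is written in TypeScript purely for brevity (the App itself is C#), and every name and GUID below is made up:

```ts
// A nominally-typed ID wrapping a statically defined GUID.
class GamePathId {
  private constructor(public readonly guid: string) {}
  static of(guid: string): GamePathId { return new GamePathId(guid); }
  equals(other: GamePathId): boolean { return this.guid === other.guid; }
}

// Common IDs defined once by the App (placeholder GUIDs):
const CommonPaths = {
  Game:    GamePathId.of('0d8ff079-8a39-4cd9-a147-111111111111'),
  Saves:   GamePathId.of('0d8ff079-8a39-4cd9-a147-222222222222'),
  AppData: GamePathId.of('0d8ff079-8a39-4cd9-a147-333333333333'),
};

// A game adds its own ID without touching the shared list:
const BethesdaDataPath = GamePathId.of('0d8ff079-8a39-4cd9-a147-444444444444');

// Any extension that knows the static ID can check whether the currently
// managed game exposes that path, even without seeing the game type:
function tryResolve(exposed: Map<string, string>, id: GamePathId): string | undefined {
  return exposed.get(id.guid);
}
```

An extension referencing `BethesdaDataPath` only needs the GUID constant, not visibility of the game type that defined it, which is exactly the upside (and the discoverability downside) described above.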
| 1.0 | Use an extensible solution for IDs instead of Enums for GameFolderType - GameFolderType currently is an Enum defined in the main App that identifies a small list of common game paths that the main application and other components might need to reference in a game-installation-agnostic way.
The issue is that some games might have particular paths they need to expose that aren't present in other games.
If a new game needs to add one such identifiable path, it would need to edit the shared Enum.
The problem is that the enum would quickly get polluted with game specific path ids that might be interpreted differently for different games.
A different solution is needed, one that doesn't rely on a central shared list of IDs but instead allows each game to define its own list in addition to the common ones.
Many of these paths could likely be useful for other extension components (mod installers, diagnostics, etc), while the main application could realistically not need to be aware of them.
So it would be convenient for games to be able to define new PathIds without having to change the main app code, while still allowing other components to reference them.
The developer of the extension or component will need to know the static ID of the path it needs and check for its existence in the currently managed game.
This admittedly isn't very elegant, but would allow bypassing visibility of the ID definition.
It can make it harder for developers to find the correct ID that is needed to be implemented for a game.
For example, all Bethesda games have the `Data` folder; to add a new Bethesda game, the developer would need to look for what ID was used in the other games to define Data and make sure to reuse the same one. Any extension developer wanting to reference Data would need to look up the exact ID to use. This is somewhat simplified in the case of human-readable IDs like strings, while more error-prone in the case of GUIDs, for example.
## Proposal:
Use statically defined GUIDs wrapped in ValueObjects (for Nominal Typing) as GamePathTypes (or GamePathIds)
To define a new GUID statically on Rider, you can press Shift twice, type in GUID and there is an option to generate a GUID in various formats.
App defines a default list of common GamePathTypes (game, saves, config, appdata)
Game then exposes a collection of GamePaths with either the App defined IDs or custom IDs.
Upside is fixed size of IDs and very low chance of collision and ability for any component to check for a path if they know the ID, even if the game type that defined it isn't visible from the extension or component.
Downside is the non-human-readable nature of the ID.
You can name the value object instance in the code that references a particular path ID instead.
| non_test | use an extensible solution for ids instead of enums for gamefoldertype gamefoldertype currently is an enum defined in the main app that identifies a small list of common game paths that the main application and other components might need to reference in a game installation agnostic way the issue is that some games might have particular paths they need to expose that aren t present in other games if new game needs to add one such identifiable path it would need to edit the shared enum the problem is that the enum would quickly get polluted with game specific path ids that might be interpreted differently for different games a different solution is needed one that doesn t rely on a central shared list of ids but instead allows each game to define its own list in addition to the common ones many of these paths could likely be useful for other extension components mod installers diagnostics etc while the main application could realistically not need to be aware of them so it would be convenient for games to be able to define new pathids without having to change the main app code while still allowing other components to reference them the developer of the extension or component will need to know the static id of the path it needs and check for its existence in the currently managed game this admittedly isn t very elegant but would allow bypassing visibility of the id definition it can make it harder for developers to find the correct id that is needed to be implemented for a game for example all bethesda games have the data folder to add a new bethesda game the developer would need to look for what id was used in the other games to define data and make sure to reuse the same any extension developer wanting to reference data would need to look up the exact id to use this is somewhat simplified in the case of human readable ids like strings while more error prone in the case of guids for example proposal use statically defined guids wrapped in valueobjects for nominal typing as gamepathtypes or gamepathids to define a new guid statically on rider you can press shift twice type in guid and there is an option to generate a guid in various formats app defines a default list of common gamepathtypes game saves confic appdata game then exposes a collection of gamepaths with either the app defined ids or custom ids upside is fixed size of ids and very low chance of collision and ability for any component to check for a path if they know the id even if the game type that defined it isn t visible from the extension or component downside is non human readable nature of the id you can name the value object instance in the code that references a particular path id instead | 0 |
331,416 | 28,963,083,256 | IssuesEvent | 2023-05-10 05:24:36 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: acceptance/build-analyze failed | C-test-failure O-robot O-roachtest release-blocker branch-release-22.2 | roachtest.acceptance/build-analyze [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10006695?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10006695?buildTab=artifacts#/acceptance/build-analyze) on release-22.2 @ [9145535e8f6e57d09e3688d6b95fbdceecc47194](https://github.com/cockroachdb/cockroach/commits/9145535e8f6e57d09e3688d6b95fbdceecc47194):
```
test artifacts and logs in: /artifacts/acceptance/build-analyze/run_1
(cluster.go:1667).Put: cluster.PutE: put /go/src/github.com/cockroachdb/cockroach/bin/cockroach failed: error persisted after 3 attempts: ~ scp -r -C -o StrictHostKeyChecking=no -i /home/roach/.ssh/id_rsa -i /home/roach/.ssh/google_compute_engine ubuntu@34.148.223.10:./cockroach ubuntu@34.148.86.152:./cockroach
ubuntu@34.148.86.152: Permission denied (publickey).
lost connection: exit status 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #71799 roachtest: acceptance/build-analyze failed [C-test-failure O-roachtest O-robot T-testeng branch-release-21.2]
</p>
</details>
/cc @cockroachdb/dev-inf
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*acceptance/build-analyze.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: acceptance/build-analyze failed - roachtest.acceptance/build-analyze [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10006695?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10006695?buildTab=artifacts#/acceptance/build-analyze) on release-22.2 @ [9145535e8f6e57d09e3688d6b95fbdceecc47194](https://github.com/cockroachdb/cockroach/commits/9145535e8f6e57d09e3688d6b95fbdceecc47194):
```
test artifacts and logs in: /artifacts/acceptance/build-analyze/run_1
(cluster.go:1667).Put: cluster.PutE: put /go/src/github.com/cockroachdb/cockroach/bin/cockroach failed: error persisted after 3 attempts: ~ scp -r -C -o StrictHostKeyChecking=no -i /home/roach/.ssh/id_rsa -i /home/roach/.ssh/google_compute_engine ubuntu@34.148.223.10:./cockroach ubuntu@34.148.86.152:./cockroach
ubuntu@34.148.86.152: Permission denied (publickey).
lost connection: exit status 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #71799 roachtest: acceptance/build-analyze failed [C-test-failure O-roachtest O-robot T-testeng branch-release-21.2]
</p>
</details>
/cc @cockroachdb/dev-inf
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*acceptance/build-analyze.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest acceptance build analyze failed roachtest acceptance build analyze with on release test artifacts and logs in artifacts acceptance build analyze run cluster go put cluster pute put go src github com cockroachdb cockroach bin cockroach failed error persisted after attempts scp r c o stricthostkeychecking no i home roach ssh id rsa i home roach ssh google compute engine ubuntu cockroach ubuntu cockroach ubuntu permission denied publickey lost connection exit status parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see same failure on other branches roachtest acceptance build analyze failed cc cockroachdb dev inf | 1 |
201,832 | 15,814,617,917 | IssuesEvent | 2021-04-05 09:46:01 | AY2021S2-CS2103-W16-1/tp | https://api.github.com/repos/AY2021S2-CS2103-W16-1/tp | closed | [PE-D] Unclear definition of sorting criteria | documentation severity.MED | The descriptions for “Sorts by task name, in increasing order” and “Sorts by task deadline, in increasing order” are not very clear. Maybe it would be clearer to define what it means by “increasing”.

<!--session: 1617429995887-4fddec0d-3b8a-49f7-9363-1197dccb41fd-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: zenlyj/ped#5 | 1.0 | [PE-D] Unclear definition of sorting criteria - The descriptions for “Sorts by task name, in increasing order” and “Sorts by task deadline, in increasing order” are not very clear. Maybe it would be clearer to define what it means by “increasing”.

<!--session: 1617429995887-4fddec0d-3b8a-49f7-9363-1197dccb41fd-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: zenlyj/ped#5 | non_test | unclear definition of sorting criterias the description for “sorts by task name in increasing order” and “sorts by task deadline in increasing order” is not very clear maybe it would be clearer to define what it means by “increasing” labels severity medium type documentationbug original zenlyj ped | 0 |
113,726 | 11,813,043,571 | IssuesEvent | 2020-03-19 21:26:51 | carla-simulator/carla | https://api.github.com/repos/carla-simulator/carla | closed | Improve documentation | backlog documentation feature request | Our documentation needs a boost!
- [x] [Community profile](https://github.com/carla-simulator/carla/community)
- [x] Hands-on tutorial
- [x] How to run headless
- [x] How to chose GPU when running headless #116
- [x] How to create your own scenarios with our tools #110
- [x] Running CARLA server in-editor #143
- [ ] Software design #145
- [ ] How many cars and pedestrians can you add? #171
- [ ] More FAQ/Troubleshooting (get from [questions list](https://github.com/carla-simulator/carla/issues?q=is%3Aissue+label%3Aquestion+sort%3Acomments-desc) and [add to documentation](https://github.com/carla-simulator/carla/labels/add%20to%20documentation))
- [x] Better road-map
- [ ] Suggestions?
All contributions are appreciated ;)
Submit your commits/pull-requests to the `documentation` branch.
| 1.0 | Improve documentation - Our documentation needs a boost!
- [x] [Community profile](https://github.com/carla-simulator/carla/community)
- [x] Hands-on tutorial
- [x] How to run headless
- [x] How to chose GPU when running headless #116
- [x] How to create your own scenarios with our tools #110
- [x] Running CARLA server in-editor #143
- [ ] Software design #145
- [ ] How many cars and pedestrians can you add? #171
- [ ] More FAQ/Troubleshooting (get from [questions list](https://github.com/carla-simulator/carla/issues?q=is%3Aissue+label%3Aquestion+sort%3Acomments-desc) and [add to documentation](https://github.com/carla-simulator/carla/labels/add%20to%20documentation))
- [x] Better road-map
- [ ] Suggestions?
All contributions are appreciated ;)
Submit your commits/pull-requests to the `documentation` branch.
| non_test | improve documentation our documentation needs a boost hands on tutorial how to run headless how to chose gpu when running headless how to create your own scenarios with our tools running carla server in editor software design how many cars and pedestrians can you add more faq troubleshooting get from and better road map suggestions all contributions are appreciated submit your commits pull requests to the documentation branch | 0 |
562,208 | 16,653,947,766 | IssuesEvent | 2021-06-05 06:58:05 | AxonFramework/AxonFramework | https://api.github.com/repos/AxonFramework/AxonFramework | closed | Exceptions not logged when TrackingEventProcessor initializes TokenStore | Ideal for Contribution Priority 1: Must Target: 4.5.2 Type: Enhancement | When an exception occurs while initializing the TokenStore, it is only logged at debug level. This was done to avoid flooding the logs with recurring stack traces.
Similar to exceptions found during processing, the first occurrence should log the entire stack trace, while errors during subsequent attempts should be logged using a summary only. | 1.0 | Exceptions not logged when TrackingEventProcessor initializes TokenStore - When an exception occurs while initializing the TokenStore, it is only logged at debug level. This was done to avoid flooding the logs with recurring stack traces.
Similar to exceptions found during processing, the first occurrence of the stack trace should log the entire stack trace, while errors during re-attempting should be logged using a summary only. | non_test | exceptions not logged when trackingeventprocessor initializes tokenstore when an exception occurs while initializing the tokenstore it is only logged on debug level this was done to avoid flooding of logs with re occurring stacktraces similar to exceptions found during processing the first occurrence of the stack trace should log the entire stack trace while errors during re attempting should be logged using a summary only | 0 |
168,020 | 26,582,362,507 | IssuesEvent | 2023-01-22 16:09:13 | jsdelivr/www.jsdelivr.com | https://api.github.com/repos/jsdelivr/www.jsdelivr.com | closed | Third-party images and badges in readmes | new design 2022 | In order to keep Google happy and optimize the performance of our project pages, I made a serverless image proxy and optimizer on top of Gcore. There is no origin at all and all sources must be manually pre-approved:
Currently supported domains, taken from popular readmes (the rewrite rule they imply is sketched after the list):
https://img.jsdelivr.com/cloud.githubusercontent.com/assets/835857/14581711/ba623018-0436-11e6-8fce-d2ccd4d379c9.gif
https://img.jsdelivr.com/img.shields.io/badge/code_style-standard-brightgreen.svg
https://img.jsdelivr.com/raw.githubusercontent.com/wiki/js-cookie/js-cookie/Browserstack-logo%402x.png
https://img.jsdelivr.com/github.com/jquery.png?s=20
https://img.jsdelivr.com/upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Canal%2B.svg/2000px-Canal%2B.svg.png
https://img.jsdelivr.com/opencollective.com/bootstrap/sponsor/0/avatar.svg
https://img.jsdelivr.com/flat.badgen.net/circleci/github/nuxt-community/workbox-cdn
https://img.jsdelivr.com/images.opencollective.com/casinofiables-com/b824bab/logo.png
https://img.jsdelivr.com/avatars.githubusercontent.com/u/9919?v=4&s=128
https://img.jsdelivr.com/badgen.net/github/checks/pillarjs/router/master?label=ci
_Originally posted by @jimaek in https://github.com/jsdelivr/www.jsdelivr.com/issues/462#issuecomment-1282432018_
| 1.0 | Third-party images and badges in readmes - In order to keep Google happy and optimize the performance of our project pages, I made a serverless image proxy and optimizer on top of Gcore. There is no origin at all and all sources must be manually pre-approved:
Currently supported domains, taken from popular readmes:
https://img.jsdelivr.com/cloud.githubusercontent.com/assets/835857/14581711/ba623018-0436-11e6-8fce-d2ccd4d379c9.gif
https://img.jsdelivr.com/img.shields.io/badge/code_style-standard-brightgreen.svg
https://img.jsdelivr.com/raw.githubusercontent.com/wiki/js-cookie/js-cookie/Browserstack-logo%402x.png
https://img.jsdelivr.com/github.com/jquery.png?s=20
https://img.jsdelivr.com/upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Canal%2B.svg/2000px-Canal%2B.svg.png
https://img.jsdelivr.com/opencollective.com/bootstrap/sponsor/0/avatar.svg
https://img.jsdelivr.com/flat.badgen.net/circleci/github/nuxt-community/workbox-cdn
https://img.jsdelivr.com/images.opencollective.com/casinofiables-com/b824bab/logo.png
https://img.jsdelivr.com/avatars.githubusercontent.com/u/9919?v=4&s=128
https://img.jsdelivr.com/badgen.net/github/checks/pillarjs/router/master?label=ci
_Originally posted by @jimaek in https://github.com/jsdelivr/www.jsdelivr.com/issues/462#issuecomment-1282432018_
| non_test | third party images and badges in readmes in order to keep google happy and optimize the performance of our project pages i made a serverless image proxy and optimizer on top of gcore there is no origin at all and all sources must be manually pre approved current supported domains i took from popular readmes originally posted by jimaek in | 0 |
7,791 | 2,610,636,518 | IssuesEvent | 2015-02-26 21:33:34 | alistairreilly/open-ig | https://api.github.com/repos/alistairreilly/open-ig | closed | Research/Development (Kutatás/Fejlesztés) | auto-migrated Component-Logic Priority-Medium Type-Defect | ```
I don't know exactly why, but quite often in the development menu, when
we would select an invention, only the video follows the selection, while
all the other parts of the panel (the number of required research centers,
the <<START>> button) do not.
It looks like the alien buildings are the cause:
after selecting buildings on a colony inhabited by aliens (e.g. meson
cannon, solar power plant, the upgradeable ones), pressing F6 reproduces
the problem.
See the attached screenshot: the detailed table is still showing the
plasma projector.
```
Original issue reported on code.google.com by `Geby.Nal...@gmail.com` on 24 Aug 2011 at 2:27
Attachments:
* [Screen.jpg](https://storage.googleapis.com/google-code-attachments/open-ig/issue-79/comment-0/Screen.jpg)
| 1.0 | Research/Development (Kutatás/Fejlesztés) - ```
I don't know exactly why, but quite often in the development menu, when
we would select an invention, only the video follows the selection, while
all the other parts of the panel (the number of required research centers,
the <<START>> button) do not.
It looks like the alien buildings are the cause:
after selecting buildings on a colony inhabited by aliens (e.g. meson
cannon, solar power plant, the upgradeable ones), pressing F6 reproduces
the problem.
See the attached screenshot: the detailed table is still showing the
plasma projector.
```
Original issue reported on code.google.com by `Geby.Nal...@gmail.com` on 24 Aug 2011 at 2:27
Attachments:
* [Screen.jpg](https://storage.googleapis.com/google-code-attachments/open-ig/issue-79/comment-0/Screen.jpg)
| non_test | kutatás fejlesztés pontosan nem tudom miért de elég gyakran a fejlesztés menüben mikor találmányt választanánk csak a videó követi a kiválasztást de a panel összes többi része szükséges kutató központok száma gomb nem úgy néz ki az idegen épületek okozzák idenegek lakta kolónián épületek kiválasztása után pl mezonlöveg naperőmű a fejleszthetők ot nyomva előáll a probléma lásd kép még mindig a plazmavetőn áll a részletes táblázat original issue reported on code google com by geby nal gmail com on aug at attachments | 0 |
52,041 | 6,216,655,662 | IssuesEvent | 2017-07-08 06:20:29 | apache/couchdb | https://api.github.com/repos/apache/couchdb | closed | EUnit couchdb_views_tests failure: suspend_process after termination | testsuite | https://travis-ci.org/apache/couchdb/jobs/250135101#L3825
```
View group shutdown
couchdb_views_tests:315: couchdb_1283...*failed*
in function erlang:suspend_process/1
called as suspend_process(<0.27290.0>)
in call from couchdb_views_tests:'-couchdb_1283/0-fun-21-'/0 (test/couchdb_views_tests.erl, line 381)
**error:badarg
output:<<"">>
```
Analysis of the logfile shows the compaction completes in just 200ms:
```
[info] 2017-07-04T21:35:33.260737Z nonode@nohost <0.27290.0> -------- Compaction started for db: eunit-test-db-1499204132844268 idx: _design/foo
[info] 2017-07-04T21:35:33.262754Z nonode@nohost <0.27290.0> -------- Compaction finished for db: eunit-test-db-1499204132844268 idx: _design/foo
```
So my hunch is that it is already gone by the time we call erlang:suspend_process. This test is racy.
## Possible Solution
@rnewson says on IRC we could spawn it with `{hibernate_after,0}` but that only works for `gen_server:start_link|start|enter_loop`, and we'd have to get deep inside `couch_index_server` to do that, I think.
We could bump up the populate_db to 1000 or 10000 documents as a workaround. | 1.0 | EUnit couchdb_views_tests failure: suspend_process after termination - https://travis-ci.org/apache/couchdb/jobs/250135101#L3825
```
View group shutdown
couchdb_views_tests:315: couchdb_1283...*failed*
in function erlang:suspend_process/1
called as suspend_process(<0.27290.0>)
in call from couchdb_views_tests:'-couchdb_1283/0-fun-21-'/0 (test/couchdb_views_tests.erl, line 381)
**error:badarg
output:<<"">>
```
Analysis of the logfile shows the compaction completes in just 200ms:
```
[info] 2017-07-04T21:35:33.260737Z nonode@nohost <0.27290.0> -------- Compaction started for db: eunit-test-db-1499204132844268 idx: _design/foo
[info] 2017-07-04T21:35:33.262754Z nonode@nohost <0.27290.0> -------- Compaction finished for db: eunit-test-db-1499204132844268 idx: _design/foo
```
So my hunch is that it is already gone by the time we call erlang:suspend_process. This test is racy.
## Possible Solution
@rnewson says on IRC we could spawn it with `{hibernate_after,0}` but that only works for `gen_server:start_link|start|enter_loop`, and we'd have to get deep inside `couch_index_server` to do that, I think.
We could bump up the populate_db to 1000 or 10000 documents as a workaround. | test | eunit couchdb views tests failure suspend process after termination view group shutdown couchdb views tests couchdb failed in function erlang suspend process called as suspend process in call from couchdb views tests couchdb fun test couchdb views tests erl line error badarg output analysis of the logfile shows the compaction completes in just nonode nohost compaction started for db eunit test db idx design foo nonode nohost compaction finished for db eunit test db idx design foo so my hunch is that it is already gone by the time we call erlang suspend process this test is racey possible solution rnewson says on irc we could spawn it with hibernate after but that only works for gen server start link start enter loop and we d have to get deep inside couch index server to do that i think we could bump up the populate db to or documents as a workaround | 1 |
187,888 | 14,433,796,159 | IssuesEvent | 2020-12-07 05:44:51 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | shelmangroup/terraform-provider-coredns: vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go; 3 LoC | fresh test tiny vendored |
Found a possible issue in [shelmangroup/terraform-provider-coredns](https://www.github.com/shelmangroup/terraform-provider-coredns) at [vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go](https://github.com/shelmangroup/terraform-provider-coredns/blob/cbba45637d269949601b77f60e62e1c47cdf0920/vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go#L1736-L1738)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to claim at line 1737 may start a goroutine
[Click here to see the code in its original context.](https://github.com/shelmangroup/terraform-provider-coredns/blob/cbba45637d269949601b77f60e62e1c47cdf0920/vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go#L1736-L1738)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, claim := range getPersistentVolumeClaims(set, pod) {
spc.claimsIndexer.Update(&claim)
}
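// Editorial note: &claim takes the address of the loop variable, which
// (before Go 1.22) is a single variable reused on every iteration. If
// Update retains the pointer or hands it to a goroutine, every stored
// claim ends up aliasing the final element. The conventional fix is to
// shadow the variable with a per-iteration copy first:
//
//	for _, claim := range getPersistentVolumeClaims(set, pod) {
//		claim := claim // fresh copy each iteration
//		spc.claimsIndexer.Update(&claim)
//	}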
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: cbba45637d269949601b77f60e62e1c47cdf0920
| 1.0 | shelmangroup/terraform-provider-coredns: vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go; 3 LoC -
Found a possible issue in [shelmangroup/terraform-provider-coredns](https://www.github.com/shelmangroup/terraform-provider-coredns) at [vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go](https://github.com/shelmangroup/terraform-provider-coredns/blob/cbba45637d269949601b77f60e62e1c47cdf0920/vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go#L1736-L1738)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to claim at line 1737 may start a goroutine
[Click here to see the code in its original context.](https://github.com/shelmangroup/terraform-provider-coredns/blob/cbba45637d269949601b77f60e62e1c47cdf0920/vendor/k8s.io/kubernetes/pkg/controller/statefulset/stateful_set_control_test.go#L1736-L1738)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, claim := range getPersistentVolumeClaims(set, pod) {
spc.claimsIndexer.Update(&claim)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: cbba45637d269949601b77f60e62e1c47cdf0920
| test | shelmangroup terraform provider coredns vendor io kubernetes pkg controller statefulset stateful set control test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to claim at line may start a goroutine click here to show the line s of go which triggered the analyzer go for claim range getpersistentvolumeclaims set pod spc claimsindexer update claim leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
311,684 | 26,805,369,497 | IssuesEvent | 2023-02-01 17:52:21 | EddieHubCommunity/LinkFree | https://api.github.com/repos/EddieHubCommunity/LinkFree | closed | New Testimonial for Krish Gupta 🥑 | testimonial | ### Name
krshkun
### Title
Most Enthusiastic and Amazing Person!!
### Description
I met Krish in the 4C community. He's the most enthusiastic, always ready for challenges, always active in the community, speaking and building stuff together. He's not only an awesome developer but also one of the most inspiring people. I love his contributions and supportive nature!! | 1.0 | New Testimonial for Krish Gupta 🥑 - ### Name
krshkun
### Title
Most Enthusiastic and Amazing Person!!
### Description
I met Krish in the 4C community. He's the most enthusiastic, always ready for challenges, always active in the community, speaking and building stuff together. He's not only an awesome developer but also one of the most inspiring people. I love his contributions and supportive nature!! | test | new testimonial for krish gupta 🥑 name krshkun title most enthusiastic and amazing person description i meet krish at community he s the most enthusiastic always ready for challenges always active in community speaking and building stuff together he s not only an awesome developer but one of the most inspiring person too i love this contributions and supportive nature | 1
98,758 | 30,110,883,109 | IssuesEvent | 2023-06-30 07:39:11 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | Frequent use of cmake_install_prefix and cmake_binary_dir will cause very serious errors | question (invalid tracker) category: build/install incomplete | ### System Information
OpenCV version: 4.6.0
using aarch64-musl
### Detailed description
Frequent use of cmake_install_prefix and cmake_binary_dir will cause very serious errors when compiling multiple files. If you want to modify it, where should you start?
### Steps to reproduce
...
### Issue submission checklist
- [X] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | 1.0 | Frequent use of cmake_install_prefix and cmake_binary_dir will cause very serious errors - ### System Information
OpenCV version: 4.6.0
using aarch64-musl
### Detailed description
Frequent use of cmake_install_prefix and cmake_binary_dir will cause very serious errors when compiling multiple files. If you want to modify it, where should you start?
### Steps to reproduce
...
### Issue submission checklist
- [X] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | non_test | frequent use of cmake install prefix and cmake binary dir will cause very serious errors system information opencv version: use musl detailed description frequent use of cmake install prefix and cmake binary dir will cause very serious errors when compiling multiple files if you want to modify it where should you start steps to reproduce issue submission checklist i report the issue it s not a question i checked the problem with documentation faq open issues forum opencv org stack overflow etc and have not found any solution i updated to the latest opencv version and the issue is still there there is reproducer code and related data files videos images onnx etc | 0 |
50,805 | 6,114,290,968 | IssuesEvent | 2017-06-22 00:33:24 | SecurityInnovation/PGPy | https://api.github.com/repos/SecurityInnovation/PGPy | closed | add regression test for #183 | testing | this should be relatively simple. I'll add this while on my way home this evening. It should be straightforward enough - given a message signed by one key and encrypted by another, the signing key should refuse to attempt decryption. | 1.0 | add regression test for #183 - this should be relatively simple. I'll add this while on my way home this evening. It should be straightforward enough - given a message signed by one key and encrypted by another, the signing key should refuse to attempt decryption. | test | add regression test for this should be relatively simple i ll add this while on my way home this evening it should be straightforward enough given a message signed by one key and encrypted by another the signing key should refuse to attempt decryption | 1 |
214,111 | 16,563,392,622 | IssuesEvent | 2021-05-29 01:02:43 | tarantool/tarantool-qa | https://api.github.com/repos/tarantool/tarantool-qa | closed | app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua is flaky | 1sp backlog flaky test prio5 teamE | Tarantool version: 2.9.0-30-g7bee61531.
How to reproduce (use debug tarantool build):
```sh
./test/test-run.py $(yes app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua app-tap/gh-4983-tnt-e-assert-false-hangs.test.lua | head -n 100)
```
Got:
```
[015] TAP version 13
[015] 1..6
[015] # ERRINJ_STDIN_ISATTY=1 /home/alex/projects/tarantool-meta/r/t-6/src/tarantool >/home/alex/projects/tarantool-meta/r/t-6/test/var/015_app-tap/out.txt & echo $!
[015] 1..1
[015] not ok - interactive mode detected
[015] ---
[015] filename: /home/alex/projects/tarantool-meta/r/t-6/test/app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua
[015] trace:
<..stripped the trace..>
[015] line: 0
[015] expected: tarantool>
[015] got: 'LuajitError: (command line):1: assertion failed!
[015] fatal error, exiting the event loop
[015] '
[015] ...
[015] # ERRINJ_STDIN_ISATTY=1 /home/alex/projects/tarantool-meta/r/t-6/src/tarantool >/home/alex/projects/tarantool-meta/r/t-6/test/var/015_app-tap/out.txt & echo $!: end
```
I guess the problem is that app-tap/gh-4983-tnt-e-assert-false-hangs.test.lua uses a file with the same name, `out.txt`, and does not clean it up at the end of the test. | 1.0 | app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua is flaky - Tarantool version: 2.9.0-30-g7bee61531.
How to reproduce (use debug tarantool build):
```sh
./test/test-run.py $(yes app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua app-tap/gh-4983-tnt-e-assert-false-hangs.test.lua | head -n 100)
```
Got:
```
[015] TAP version 13
[015] 1..6
[015] # ERRINJ_STDIN_ISATTY=1 /home/alex/projects/tarantool-meta/r/t-6/src/tarantool >/home/alex/projects/tarantool-meta/r/t-6/test/var/015_app-tap/out.txt & echo $!
[015] 1..1
[015] not ok - interactive mode detected
[015] ---
[015] filename: /home/alex/projects/tarantool-meta/r/t-6/test/app-tap/gh-5040-inter-mode-isatty-via-errinj.test.lua
[015] trace:
<..stripped the trace..>
[015] line: 0
[015] expected: tarantool>
[015] got: 'LuajitError: (command line):1: assertion failed!
[015] fatal error, exiting the event loop
[015] '
[015] ...
[015] # ERRINJ_STDIN_ISATTY=1 /home/alex/projects/tarantool-meta/r/t-6/src/tarantool >/home/alex/projects/tarantool-meta/r/t-6/test/var/015_app-tap/out.txt & echo $!: end
```
I guess the problem is that app-tap/gh-4983-tnt-e-assert-false-hangs.test.lua uses the same named file `out.txt` and does not clean it at the end of the test. | test | app tap gh inter mode isatty via errinj test lua is flaky tarantool version how to reproduce use debug tarantool build sh test test run py yes app tap gh inter mode isatty via errinj test lua app tap gh tnt e assert false hangs test lua head n got tap version errinj stdin isatty home alex projects tarantool meta r t src tarantool home alex projects tarantool meta r t test var app tap out txt echo not ok interactive mode detected filename home alex projects tarantool meta r t test app tap gh inter mode isatty via errinj test lua trace line expected tarantool got luajiterror command line assertion failed fatal error exiting the event loop errinj stdin isatty home alex projects tarantool meta r t src tarantool home alex projects tarantool meta r t test var app tap out txt echo end i guess the problem is that app tap gh tnt e assert false hangs test lua uses the same named file out txt and does not clean it at the end of the test | 1 |
416,093 | 12,139,403,715 | IssuesEvent | 2020-04-23 18:48:05 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | yourcircuit.com - Content is breaking into a 2nd line when image tag is directly followed by a non breaking space | ML Correct ML OFF browser-chrome priority-normal severity-minor | <!-- @browser: Chrome 79.0.3932 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3932.0 Safari/537.36 Edg/79.0.300.0 -->
<!-- @reported_with: web -->
**URL**: https://yourcircuit.com
**Browser / Version**: Chrome 79.0.3932
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Content is breaking into a 2nd line when image tag is directly followed by a non breaking space
**Steps to Reproduce**:
Consider a span with some text, an image, and some more text.
The image is separated from the text by a non breaking space.
<span> some random text <img ... /> some more text</span>
The span also uses the following CSS:
white-space: nowrap
overflow: hidden
text-overflow: ellipsis
The span should be displayed on a single line; the content that doesn't fit should be hidden, and the ellipsis added at the end of the line.
However with Chrome and Edge (based on Chromium 78 or higher), the content is breaking into a second line at the non breaking space directly after the image.
This only happens under some circumstances (multiple div layers around the content)
The issue is not reproducible in Chrome 77 or lower, nor on Firefox
[](https://webcompat.com/uploads/2019/10/bfb1e8bf-00cc-4ac1-baa4-2be3f0ec48b0.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | yourcircuit.com - Content is breaking into a 2nd line when image tag is directly followed by a non breaking space - <!-- @browser: Chrome 79.0.3932 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3932.0 Safari/537.36 Edg/79.0.300.0 -->
<!-- @reported_with: web -->
**URL**: https://yourcircuit.com
**Browser / Version**: Chrome 79.0.3932
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Content is breaking into a 2nd line when image tag is directly followed by a non breaking space
**Steps to Reproduce**:
Consider a span with some text, an image, and some more text.
The image is separated from the text by a non breaking space.
<span> some random text <img ... /> some more text</span>
The span also uses the following CSS:
white-space: nowrap
overflow: hidden
text-overflow: ellipsis
The span should be displayed on a single line; the content that doesn't fit should be hidden, and the ellipsis added at the end of the line.
However with Chrome and Edge (based on Chromium 78 or higher), the content is breaking into a second line at the non breaking space directly after the image.
This only happens under some circumstances (multiple div layers around the content)
The issue is not reproducible in Chrome 77 or lower, nor on Firefox
[](https://webcompat.com/uploads/2019/10/bfb1e8bf-00cc-4ac1-baa4-2be3f0ec48b0.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | yourcircuit com content is breaking into a line when image tag is directly followed by a non breaking space url browser version chrome operating system windows tested another browser yes problem type design is broken description content is breaking into a line when image tag is directly followed by a non breaking space steps to reproduce consider a span with some text an image and some more text the image is separated from the text by a non breaking space some random text nbsp some more text the span also uses the following css white space nowrap overflow hidden text overflow ellipsis the span should be displayed in a single line the content that doesn t fit should be hidden and the ellipsis added to the end of line however with chrome and edge based on chromium or higher the content is breaking into a second line at the non breaking space directly after the image this only happens under some circumstances multiple div layers around the content the issue is not reproducible in chrome or lower and on firefox browser configuration none from with ❤️ | 0 |
344,254 | 10,341,953,773 | IssuesEvent | 2019-09-04 04:30:12 | ucb-bar/hammer | https://api.github.com/repos/ucb-bar/hammer | closed | Remove references to SAED32 | Chipyard medium priority | We should scrub all the references to SAED32; we don't necessarily need to do a history rewrite, but ASAP7 should be what we reference going forward. | 1.0 | Remove references to SAED32 - We should scrub all the references to SAED32; we don't necessarily need to do a history rewrite, but ASAP7 should be what we reference going forward. | non_test | remove references to we should scrub all the references to we don t necessarily need to do a history rewrite but should be what we reference going forward | 0
125,706 | 12,266,861,345 | IssuesEvent | 2020-05-07 09:40:22 | JuliaDocs/Documenter.jl | https://api.github.com/repos/JuliaDocs/Documenter.jl | closed | /stable is always linking to the previous version | Type: Documentation Type: Question | I'm building the documentation using github actions, and after TagBot creates a new tag + release, the stable version of the documentation is still the previous one - for example:
https://github.com/EcoJulia/GBIF.jl/runs/550632506?check_suite_focus=true#step:5:32
Is there something I am missing? I have added no options besides `push_preview=true` to the `make.jl`
file. | 1.0 | /stable is always linking to the previous version - I'm building the documentation using github actions, and after TagBot creates a new tag + release, the stable version of the documentation is still the previous one - for example:
https://github.com/EcoJulia/GBIF.jl/runs/550632506?check_suite_focus=true#step:5:32
Is there something I am missing? I have added no options besides `push_preview=true` to the `make.jl`
file. | non_test | stable is always linking to the previous version i m building the documentation using github actions and after tagbot creates a new tag release the stable version of the documentation is still the previous one for example is there something i am missing i have no added options besides push preview true to the make jl file | 0 |
3,385 | 13,629,943,832 | IssuesEvent | 2020-09-24 15:45:53 | submariner-io/submariner | https://api.github.com/repos/submariner-io/submariner | closed | Epic: Improve linting coverage, fix Go Report Card validations | automation cncf | There are a number of validations run against submariner-io/submariner and reported on the main README as an icon. While we do have an A+, there are a number of failing tests. We should dig through the test results and clean them up.
https://goreportcard.com/report/github.com/submariner-io/submariner
Update: Also using this as an epic to track the subsequent addition of linters through golangci-lint. | 1.0 | Epic: Improve linting coverage, fix Go Report Card validations - There are a number of validations run against submariner-io/submariner and reported on the main README as an icon. While we do have an A+, there are a number of failing tests. We should dig through the test results and clean them up.
https://goreportcard.com/report/github.com/submariner-io/submariner
Update: Also using this as an epic to track the subsequent addition of linters through golangci-lint. | non_test | epic improve linting coverage fix go report card validations there are a number of validations run against submariner io submariner and reported on the main readme as an icon while we do have an a there are a number of failing tests we should dig through the test results and clean them up update also using this as an epic to track the subsequent addition of linters through golangci lint | 0 |
62,374 | 17,023,909,003 | IssuesEvent | 2021-07-03 04:30:12 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Duplicate HTML ID on login page | Component: website Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 11.54am, Sunday, 17th August 2014]**
On the login page, the remember_me_openid ID exists twice:
```
<div class="form-row" id="remember_me_openid">
<input type="checkbox" value="yes" tabindex="5" name="remember_me_openid" id="remember_me_openid">
<label for="remember_me_openid" class="standard-label">Se souvenir de moi</label>
</div>
```
This is not valid HTML and due to this a click on the label does not check the associated checkbox (at least on Firefox). | 1.0 | Duplicate HTML ID on login page - **[Submitted to the original trac issue database at 11.54am, Sunday, 17th August 2014]**
On the login page, the remember_me_openid ID exists twice:
```
<div class="form-row" id="remember_me_openid">
<input type="checkbox" value="yes" tabindex="5" name="remember_me_openid" id="remember_me_openid">
<label for="remember_me_openid" class="standard-label">Se souvenir de moi</label>
</div>
```
This is not valid HTML and due to this a click on the label does not check the associated checkbox (at least on Firefox). | non_test | duplicate html id on login page on the login page the remember me openid id exists twice se souvenir de moi this is not valid html and due to this a click on the label does not check the associated checkbox at least on firefox | 0 |
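As an aside to the ticket above: duplicate id attributes can be caught mechanically. Below is a minimal Python sketch (not part of the ticket; the fragment is abbreviated from it) that flags duplicate ids in an HTML snippet using only the standard library.
```python
# Collect every id attribute while parsing, then report duplicates.
from collections import Counter
from html.parser import HTMLParser

class IdCollector(HTMLParser):
    """Records every id attribute seen on a start tag."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.append(value)

html = '''
<div class="form-row" id="remember_me_openid">
  <input type="checkbox" id="remember_me_openid">
  <label for="remember_me_openid">Se souvenir de moi</label>
</div>
'''

collector = IdCollector()
collector.feed(html)
duplicates = [i for i, n in Counter(collector.ids).items() if n > 1]
print(duplicates)  # ['remember_me_openid']
```
Any id reported here breaks the one-to-one label/checkbox association the ticket describes.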
123,801 | 4,876,179,325 | IssuesEvent | 2016-11-16 11:58:14 | tsauvine/rubyric | https://api.github.com/repos/tsauvine/rubyric | closed | Preview comments in submission wall | priority:high | In collaborative mode (submission wall), the user should be able to easily view comments in a popup (hovering or preview button). | 1.0 | Preview comments in submission wall - In collaborative mode (submission wall), the user should be able to easily view comments in a popup (hovering or preview button). | non_test | preview comments in submission wall in collaborative mode submission wall the user should be able to easily view comments in a popup hovering or preview button | 0 |
108,484 | 16,777,920,265 | IssuesEvent | 2021-06-15 01:19:24 | renfei/GitPub | https://api.github.com/repos/renfei/GitPub | opened | CVE-2020-36181 (High) detected in jackson-databind-2.9.2.jar | security vulnerability | ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /GitPub/pom.xml</p>
<p>Path to vulnerable library: 2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36181 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /GitPub/pom.xml</p>
<p>Path to vulnerable library: 2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file gitpub pom xml path to vulnerable library repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
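A quick illustration of how the fix resolution quoted above can be checked: the sketch below compares a jackson-databind version string against 2.9.10.8 by naive tuple comparison. The helper names are hypothetical, and real remediation should respect per-release-line fix versions rather than a single threshold.
```python
# Hypothetical helper, not part of the report: naive check of an
# installed jackson-databind version against the quoted fix resolution.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def below_fix(installed: str, fixed: str = "2.9.10.8") -> bool:
    # Lexicographic tuple comparison; good enough for dotted numerics.
    return parse(installed) < parse(fixed)

print(below_fix("2.9.2"))     # True: the version flagged above
print(below_fix("2.9.10.8"))  # False: meets the fix resolution
```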
274,576 | 23,850,601,887 | IssuesEvent | 2022-09-06 17:32:50 | guynir42/virtualobserver | https://api.github.com/repos/guynir42/virtualobserver | opened | Simulator class | testing | Need to add a simulator that injects events into lightcurves (for starters).
Need to test that lightcurves with and without the simulator injection look right. | 1.0 | Simulator class - Need to add a simulator that injects events into lightcurves (for starters).
Need to test that lightcurves with and without the simulator injection look right. | test | simulator class need to add a simulator that injects events into lightcurves for starters need to test that lightcurves with and without the simulator injection look right | 1 |
40,851 | 5,318,807,454 | IssuesEvent | 2017-02-14 03:39:53 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | make tests sometimes doesn't work for some reason | area/test-infra | Error message is following:
```
~/go/src/k8s.io/kubernetes$ make test WHAT=pkg/controller/node
Running tests for APIVersion: v1,autoscaling/v1,batch/v1,batch/v2alpha1,extensions/v1beta1,apps/v1alpha1,federation/v1beta1,policy/v1alpha1,rbac.authorization.k8s.io/v1alpha1,certificates/v1alpha1
+++ [0713 16:25:23] Running tests without code coverage
Binary file (standard input) matches
!!! Error in hack/make-rules/test.sh:192
'return ${rc}' exited with status 1
Call stack:
1: hack/make-rules/test.sh:192 main(...)
Exiting with status 1
make: *** [test] Error 1
```
I can successfully run (or rather fail) the test using `godep go test`
cc @wojtek-t
| 1.0 | make tests sometimes doesn't work for some reason - Error message is following:
```
~/go/src/k8s.io/kubernetes$ make test WHAT=pkg/controller/node
Running tests for APIVersion: v1,autoscaling/v1,batch/v1,batch/v2alpha1,extensions/v1beta1,apps/v1alpha1,federation/v1beta1,policy/v1alpha1,rbac.authorization.k8s.io/v1alpha1,certificates/v1alpha1
+++ [0713 16:25:23] Running tests without code coverage
Binary file (standard input) matches
!!! Error in hack/make-rules/test.sh:192
'return ${rc}' exited with status 1
Call stack:
1: hack/make-rules/test.sh:192 main(...)
Exiting with status 1
make: *** [test] Error 1
```
I can successfully run (or rather fail) the test using `godep go test`
cc @wojtek-t
| test | make tests sometimes doesn t work for some reason error message is following go src io kubernetes make test what pkg controller node running tests for apiversion autoscaling batch batch extensions apps federation policy rbac authorization io certificates running tests without code coverage binary file standard input matches error in hack make rules test sh return rc exited with status call stack hack make rules test sh main exiting with status make error i can successfully run or rather fail the test using godep go test cc wojtek t | 1 |
52,806 | 22,392,216,746 | IssuesEvent | 2022-06-17 08:50:34 | ITISFoundation/osparc-simcore | https://api.github.com/repos/ITISFoundation/osparc-simcore | opened | Investigate Dask-Scheduler KeyError | bug a:dask-service | Querying our deployments with graylog for `"KeyError" AND container_name: /.*dask-scheduler.*/` shows multiple KeyErrors occurring.
These key errors seem to be related to the dask-sidecar getting into a non-working state on dalco on wed 15/6/22.
@sanderegg wrote:
```
ok it looks like the dask scheduler is unhappy and not recovering. I'll restart it for now.
there were also computational jobs with illegal job_ids
I am not sure if that is a reminiscent of the old computational backend or something else. there is a bug entry already for this. I will check in the other deploys anyway
so restarting the dask-scheduler has fixed the logs issue...
I[t] looks like the dask-scheduler is not resilient to disappearing clients in some use-case
```
 | 1.0 | Investigate Dask-Scheduler KeyError - Querying our deployments with graylog for `"KeyError" AND container_name: /.*dask-scheduler.*/` shows multiple KeyErrors occurring.
These key errors seem to be related to the dask-sidecar getting into a non-working state on dalco on wed 15/6/22.
@sanderegg wrote:
```
ok it looks like the dask scheduler is unhappy and not recovering. I'll restart it for now.
there were also computational jobs with illegal job_ids
I am not sure if that is a reminiscent of the old computational backend or something else. there is a bug entry already for this. I will check in the other deploys anyway
so restarting the dask-scheduler has fixed the logs issue...
I[t] looks like the dask-scheduler is not resilient to disappearing clients in some use-case
```
 | non_test | investigate dask scheduler keyerror querying our deployments with graylog for keyerror and container name dask scheduler shows multiple keyerrors occurring these key errors seem to be related to the dask sidecar getting into a non working state on dalco on wed sanderegg wrote ok it looks like the dask scheduler is unhappy and not recovering i ll restart it for now there were also computational jobs with illegal job ids i am not sure if that is a reminiscent of the old computational backend or something else there is a bug entry already for this i will check in the other deploys anyway so restarting the dask scheduler has fixed the logs issue i looks like the dask scheduler is not resilient to disappearing clients in some use case | 0 |
135,708 | 5,257,288,559 | IssuesEvent | 2017-02-02 20:09:16 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | opened | NPM CLI: List all topic titles and language features to help measure coverage | Difficulty: Low Low-Hanging Fruit Mission: Programming Priority: High Status: Needs a Developer Suggestion Topic: JavaScript | <!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}"
OTHER
{IA Name}: {Short description} -->
### Description
For every Fathead, we need to gather the set of articles we wish to provide coverage for.
Create a `cover/` directory containing text files that list all the topic titles and language features to cover.
These lists will then be used as Unit tests to help you measure the Fathead's coverage.
For more info please see [the docs](https://docs.duckduckhack.com/programming-mission/creating-effective-fatheads.html).
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/npm_cli
<!-- FILL THIS IN: ^^^^ --> | 1.0 | NPM CLI: List all topic titles and language features to help measure coverage - <!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}"
OTHER
{IA Name}: {Short description} -->
### Description
For every Fathead, we need to gather the set of articles we wish to provide coverage for.
Create a `cover/` directory containing text files that list all the topic titles and language features to cover.
These lists will then be used as Unit tests to help you measure the Fathead's coverage.
For more info please see [the docs](https://docs.duckduckhack.com/programming-mission/creating-effective-fatheads.html).
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/npm_cli
<!-- FILL THIS IN: ^^^^ --> | non_test | npm cli list all topic titles and language features to help measure coverage please use the appropriate issue title format bug fix ia name bug short description of bug suggestion ia name suggestion short description of suggestion other ia name short description description for every fathead we need to gather the set of articles we wish to provide coverage for create a cover directory containing text files that list all the topic titles and language features to cover these lists will then be used as unit tests to help you measure the fathead s coverage for more info please see get started claim this issue by commenting below review our and fork this repository create a pull request resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help instant answer page | 0 |
44,881 | 23,802,116,506 | IssuesEvent | 2022-09-03 13:09:35 | nuxt/framework | https://api.github.com/repos/nuxt/framework | opened | improvements to style inlining | enhancement 🍰 p2-nice-to-have performance | There is still some follow-on work to do to improve style-inlining in Nuxt. Context: https://github.com/nuxt/framework/issues/6755 and https://github.com/nuxt/framework/pull/7160.
- [ ] **inline styles with webpack builder**
- [ ] **reduce double-downloading of css**
Possibly we could move CSS resource hints from preload -> prefetch, and/or change stylesheet to load after site is interactive, possibly following one of [these strategies](https://github.com/GoogleChromeLabs/critters#preloadstrategy). | True | improvements to style inlining - There is still some follow-on work to do to improve style-inlining in Nuxt. Context: https://github.com/nuxt/framework/issues/6755 and https://github.com/nuxt/framework/pull/7160.
- [ ] **inline styles with webpack builder**
- [ ] **reduce double-downloading of css**
Possibly we could move CSS resource hints from preload -> prefetch, and/or change stylesheet to load after site is interactive, possibly following one of [these strategies](https://github.com/GoogleChromeLabs/critters#preloadstrategy). | non_test | improvements to style inlining there is still some follow on work to do to improve style inlining in nuxt context and inline styles with webpack builder reduce double downloading of css possibly we could move css resource hints from preload prefetch and or change stylesheet to load after site is interactive possibly following one of | 0 |
74,632 | 7,434,412,181 | IssuesEvent | 2018-03-26 10:56:19 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | UI - Do not show "Default Pod Security Policy" for cluster and "Pod Security Policy" for projects for clusters other than RKE clusters. | area/cluster area/psp area/ui status/resolved status/to-test version/2.0 | Rancher versions: 2.0 build from master
Pod Security Policy is supported only for Amazon EC2, Digital Ocean and Custom clusters.
Do not show "Default Pod Security Policy" for cluster and "Pod Security Policy" for projects for clusters other than RKE clusters. | 1.0 | UI - Do not show "Default Pod Security Policy" for cluster and "Pod Security Policy" for projects for clusters other than RKE clusters. - Rancher versions: 2.0 build from master
Pod Security Policy is supported only for Amazon EC2, Digital Ocean and Custom clusters.
Do not show "Default Pod Security Policy" for cluster and "Pod Security Policy" for projects for clusters other than RKE clusters. | test | ui do not show default pod security policy for cluster and pod security policy for projects for clusters other than rke clusters rancher versions build from master pod security policy is supported only for amazon digital ocean and custom clusters do not show default pod security policy for cluster and pod security policy for projects for clusters other than rke clusters | 1 |
693,748 | 23,788,801,165 | IssuesEvent | 2022-09-02 12:46:01 | tradingstrategy-ai/trade-executor | https://api.github.com/repos/tradingstrategy-ai/trade-executor | closed | Add Logger name to trade-executors | priority: P2 | Change the logger application field to trade-executor[strategy] so we can search it easily.

| 1.0 | Add Logger name to trade-executors - Change the logger application field to trade-executor[strategy] so we can search it easily.

| non_test | add logger name to trade executors change the logger application field to trade executor so we can search it easily | 0 |
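For context on the request above, one way to stamp an application field onto every log record in Python is a LoggerAdapter; a minimal sketch follows. The strategy id in brackets is a placeholder, not taken from the issue.
```python
import logging

# Formatter exposes the injected "application" field on each record.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(application)s %(levelname)s %(message)s"))

logger = logging.getLogger("trade_executor")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# LoggerAdapter merges its extra dict into every record it emits.
log = logging.LoggerAdapter(logger, {"application": "trade-executor[my-strategy]"})
log.info("position opened")  # -> trade-executor[my-strategy] INFO position opened
```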
316,680 | 27,175,955,779 | IssuesEvent | 2023-02-18 02:00:30 | dbt-labs/dbt-core | https://api.github.com/repos/dbt-labs/dbt-core | closed | [CT-396] [Bug] Generic tests aren't valid in BigQuery when the struct and table name are the same | bug bigquery stale dbt tests Team:Adapters jira | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In BigQuery, if the table name and the struct within it both have the same name, the unique test fails because table aliases are not applied. This happens because of the way BigQuery parses the ```test_data.id``` portion: BigQuery believes that the ```test_data``` part is the table and then looks for a column called ```id```.

This also affects the accepted_values, not_null, and relationship tests. I'm not sure if this also affects other packages such as dbt_utils etc as I haven't looked at the code for them.
### Expected Behavior
Instead, what it should be is similar to below where the table name is given an alias that is different to the name of any table, column, or struct. In the screenshot, since ```test_data``` is no longer the table name it is looking for a struct with that name. When it finds one, it then looks for a column called ```id```. This now allows for the query to be valid as shown in the top right.

I believe changing it in these four places will fix it:
- [accepted_values](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/accepted_values.sql#L9)
- [not_null](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/not_null.sql#L4)
- [relationships](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/relationships.sql#L5)
- [unique](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/unique.sql#L7)
The alias will need to be random enough that no table or struct will have this name.
I'm not familiar with the intricacies of the code base but if you think it's limited to these four files I would be happy make the changes.
### Steps To Reproduce
This code is the output of a compiled unique test, replacing the source with the test data
```sql
WITH test_data AS (
SELECT
STRUCT(
1 AS id
) AS test_data
)
, dbt_test__target as (
select test_data.id as unique_field
from `test_data` AS some_alias_that_is_unlikely_to_be_a_table
where test_data.id is not null
)
select
unique_field,
count(*) as n_records
from dbt_test__target
group by unique_field
having count(*) > 1;
```
### Relevant log output
_No response_
### Environment
```markdown
- OS: dbt Cloud & Mac OS 11.6.3
- Python: 3.9.10
- dbt: 1.0.3
```
### What database are you using dbt with?
bigquery
### Additional Context
Not sure if this also applies to the other adapters. Looking at the documentation for postgres, redshift, and snowflake they all support the ```AS``` keyword so I don't think this will break them | 1.0 | [CT-396] [Bug] Generic tests aren't valid in BigQuery when the struct and table name are the same - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In BigQuery, if the table name and the struct within it both have the same name, the unique test fails because table aliases are not applied. This happens because of the way BigQuery parses the ```test_data.id``` portion: BigQuery believes that the ```test_data``` part is the table and then looks for a column called ```id```.

This also affects the accepted_values, not_null, and relationship tests. I'm not sure if this also affects other packages such as dbt_utils etc as I haven't looked at the code for them.
### Expected Behavior
Instead, what it should be is similar to below where the table name is given an alias that is different to the name of any table, column, or struct. In the screenshot, since ```test_data``` is no longer the table name it is looking for a struct with that name. When it finds one, it then looks for a column called ```id```. This now allows for the query to be valid as shown in the top right.

I believe changing it in these four places will fix it:
- [accepted_values](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/accepted_values.sql#L9)
- [not_null](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/not_null.sql#L4)
- [relationships](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/relationships.sql#L5)
- [unique](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/generic_test_sql/unique.sql#L7)
The alias will need to be random enough that no table or struct will have this name.
I'm not familiar with the intricacies of the code base but if you think it's limited to these four files I would be happy make the changes.
### Steps To Reproduce
This code is the output of a compiled unique test, replacing the source with the test data
```sql
WITH test_data AS (
SELECT
STRUCT(
1 AS id
) AS test_data
)
, dbt_test__target as (
select test_data.id as unique_field
from `test_data` AS some_alias_that_is_unlikely_to_be_a_table
where test_data.id is not null
)
select
unique_field,
count(*) as n_records
from dbt_test__target
group by unique_field
having count(*) > 1;
```
### Relevant log output
_No response_
### Environment
```markdown
- OS: dbt Cloud & Mac OS 11.6.3
- Python: 3.9.10
- dbt: 1.0.3
```
### What database are you using dbt with?
bigquery
### Additional Context
Not sure if this also applies to the other adapters. Looking at the documentation for postgres, redshift, and snowflake they all support the ```AS``` keyword so I don't think this will break them | test | generic tests aren t valid in bigquery when the struct and table name are the same is there an existing issue for this i have searched the existing issues current behavior in bigquery if the table name and the struct within both have the same name the unique test is failing due to table aliases not being applied this is happening due to the way bigquery parses the test data id portion bigquery believes that the test data part is the table and then looks for a column called id this also affects the accepted values not null and relationship tests i m not sure if this also affects other packages such as dbt utils etc as i haven t looked at the code for them expected behavior instead what it should be is similar to below where the table name is given an alias that is different to the name of any table column or struct in the screenshot since test data is no longer the table name it is looking for a struct with that name when it finds one it then looks for a column called id this now allows for the query to be valid as shown in the top right i believe changing it in these fours places will fix it the alias will need to be random enough that no table or struct will have this name i m not familiar with the intricacies of the code base but if you think it s limited to these four files i would be happy make the changes steps to reproduce this code is the output of a compiled unique test replacing the source with the test data sql with test data as select struct as id as test data dbt test target as select test data id as unique field from test data as some alias that is unlikely to be a table where test data id is not null select unique field count as n records from dbt test target group by unique field having count relevant log output no response environment markdown os dbt cloud mac os python dbt what database are you using dbt with bigquery additional context not sure if this also applies to the other adapters looking at the documentation for postgres redshift and snowflake they all support the as keyword so i don t think this will break them | 1 |
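To make the proposed fix concrete, here is a condensed Python sketch of the aliasing idea (not dbt's actual macro): render the uniqueness test with a generated table alias so a struct sharing the table's name no longer shadows it. The alias prefix is an arbitrary choice.
```python
# Sketch only: generate the aliased uniqueness-test SQL described above.
import uuid

def unique_test_sql(model: str, column: str) -> str:
    # A random alias is unlikely to collide with any table, column or struct.
    alias = f"dbt_alias_{uuid.uuid4().hex[:8]}"
    return (
        f"select {column} as unique_field, count(*) as n_records\n"
        f"from {model} as {alias}\n"
        f"group by {column}\n"
        f"having count(*) > 1"
    )

# `test_data.id` now resolves to the struct, because the table itself
# is referenced through the generated alias.
print(unique_test_sql("`test_data`", "test_data.id"))
```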
251,711 | 21,519,163,309 | IssuesEvent | 2022-04-28 12:49:19 | keycloak/keycloak | https://api.github.com/repos/keycloak/keycloak | opened | Add support for flame graph generation to model testsuite | area/testsuite kind/enhancement team/storage-sig | ### Description
Add support for [async profiler](https://github.com/jvm-profiling-tools/async-profiler/) for generating flame graphs when running model testsuite
### Discussion
_No response_
### Motivation
Async profiler provides a lightweight means to obtain profiling information and present it by means of e.g. flame graphs.
### Details
_No response_ | 1.0 | Add support for flame graph generation to model testsuite - ### Description
Add support for [async profiler](https://github.com/jvm-profiling-tools/async-profiler/) for generating flame graphs when running model testsuite
### Discussion
_No response_
### Motivation
Async profiler provides a lightweight means to obtain profiling information and present it by means of e.g. flame graphs.
### Details
_No response_ | test | add support for flame graph generation to model testsuite description add support for for generating flame graphs when running model testsuite discussion no response motivation async profiler provides a lightweight means to obtain profiling information and present it by means of e g flame graphs details no response | 1 |
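For reference, async-profiler is typically driven through its profiler.sh launcher with -d (duration) and -f (output file); a hedged Python wrapper might look like the sketch below. The install path, pid handling, and function name are assumptions, not Keycloak code.
```python
# Assumed paths and names; only the documented -d/-f flags are used.
import subprocess

def flamegraph(pid: int, seconds: int = 30,
               profiler: str = "/opt/async-profiler/profiler.sh",
               out: str = "/tmp/flamegraph.html") -> str:
    # Equivalent to: profiler.sh -d 30 -f /tmp/flamegraph.html <pid>
    subprocess.run([profiler, "-d", str(seconds), "-f", out, str(pid)],
                   check=True)
    return out
```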
40,080 | 5,271,296,367 | IssuesEvent | 2017-02-06 09:14:31 | puikinsh/illdy | https://api.github.com/repos/puikinsh/illdy | closed | Illdy Customizer Improvements: Color / Background controls per section | enhancement tested | - not available for: counter & contact section | 1.0 | Illdy Customizer Improvements: Color / Background controls per section - - not available for: counter & contact section | test | illdy customizer improvements color background controls per section not available for counter contact section | 1 |
51,704 | 6,193,637,991 | IssuesEvent | 2017-07-05 07:49:24 | TEAMMATES/teammates | https://api.github.com/repos/TEAMMATES/teammates | opened | InstructorFeedbackResultsPageUiTest: improve stability of hover-triggered popover image verification | a-Testing | The `testViewPhotoAndAjaxForLargeScaledSession` test in `InstructorFeedbackResultsPageUiTest` is still moderately unstable even after prior work done in #7068 and #7083. This is mainly because popovers that are triggered by hover actions sometimes fail to appear, causing hover-triggered popover image verifications to fail unpredictably.
Rather than retrying the entire test, `RetryManager` can be used to retry just the hover-verify process. This would help improve the overall success rate of the test. | 1.0 | InstructorFeedbackResultsPageUiTest: improve stability of hover-triggered popover image verification - The `testViewPhotoAndAjaxForLargeScaledSession` test in `InstructorFeedbackResultsPageUiTest` is still moderately unstable even after prior work done in #7068 and #7083. This is mainly because popovers that are triggered by hover actions sometimes fail to appear, causing hover-triggered popover image verifications to fail unpredictably.
Rather than retrying the entire test, `RetryManager` can be used to retry just the hover-verify process. This would help improve the overall success rate of the test. | test | instructorfeedbackresultspageuitest improve stability of hover triggered popover image verification the testviewphotoandajaxforlargescaledsession test in instructorfeedbackresultspageuitest is still moderately unstable even after prior work done in and this is mainly because popovers that are triggered by hover actions sometimes fail to appear causing hover triggered popover image verifications to fail unpredictably rather than retrying the entire test retrymanager can be used to retry just the hover verify process this would help improve the overall success rate of the test | 1 |
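The retry idea above can be sketched in a few lines of Python: wrap only the flaky hover-and-verify step, not the whole test. hover_and_verify and the retry counts below are hypothetical names, not TEAMMATES code.
```python
import time

def retry(action, max_tries: int = 3, delay_s: float = 1.0):
    """Re-run a flaky step a few times before giving up."""
    for attempt in range(1, max_tries + 1):
        try:
            return action()
        except AssertionError:
            if attempt == max_tries:
                raise
            time.sleep(delay_s)  # give the popover time to appear

# usage sketch: retry(lambda: hover_and_verify(photo_element))
```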
294,468 | 9,024,304,543 | IssuesEvent | 2019-02-07 10:11:15 | carldata/borsuk | https://api.github.com/repos/carldata/borsuk | opened | Get RDII model faster | Normal Priority | Right now flow-works-http gets models by first listing the storms with a 12H sessionWindow, then looking for rdiis with the appropriate start and end date, and then asking for the model. A possible solution is to have one endpoint for getting the model by startDate/endDate, or to return rdii ids in the envelope of the storms response. | 1.0 | Get RDII model faster - Right now flow-works-http gets models by first listing the storms with a 12H sessionWindow, then looking for rdiis with the appropriate start and end date, and then asking for the model. A possible solution is to have one endpoint for getting the model by startDate/endDate, or to return rdii ids in the envelope of the storms response. | non_test | get rdii model faster right now flow works http gets models by first listing the storms with a sessionwindow then looking for rdiis with the appropriate start and end date and then asking for the model a possible solution is to have one endpoint for getting the model by startdate enddate or to return rdii ids in the envelope of the storms response | 0 |
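As an illustration of the single-endpoint option mentioned above, a minimal Flask sketch in Python (borsuk itself is not Python; the route, parameter names, and in-memory store here are hypothetical):
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in store mapping (startDate, endDate) -> model id.
MODELS = {("2019-01-01", "2019-01-02"): "rdii-model-42"}

@app.route("/rdii/model")
def rdii_model():
    key = (request.args["startDate"], request.args["endDate"])
    if key not in MODELS:
        return jsonify(error="no model for this window"), 404
    return jsonify(model=MODELS[key])
```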
13,160 | 3,315,413,882 | IssuesEvent | 2015-11-06 11:55:36 | SPW-DIG/metawal-core-geonetwork | https://api.github.com/repos/SPW-DIG/metawal-core-geonetwork | closed | Automatic addition of INSPIRE conformity - minor changes | criticité.mineur Env test - OK Env valid - OK | - The text must be added in the language of the metadata record.
- To pass the validator, the text must be exactly the one found at http://inspire.ec.europa.eu/schemas/common/1.0/enums/
- The date must also be adjusted
Apart from that, it is seriously handy... | 1.0 | Automatic addition of INSPIRE conformity - minor changes - - The text must be added in the language of the metadata record.
- To pass the validator, the text must be exactly the one found at http://inspire.ec.europa.eu/schemas/common/1.0/enums/
- The date must also be adjusted
Apart from that, it is seriously handy... | test | automatic addition of inspire conformity minor changes the text must be added in the language of the metadata record to pass the validator the text must be exactly the one found at the date must also be adjusted apart from that it is seriously handy | 1 |
660,284 | 21,959,893,616 | IssuesEvent | 2022-05-24 14:59:40 | feast-dev/feast | https://api.github.com/repos/feast-dev/feast | opened | Batch transformations | kind/feature priority/p1 Community Contribution Needed kind/project | See also: [Feast RFC-028: Batch Transformations](https://docs.google.com/document/d/1964OkzuBljifDvkV-0fakp2uaijnVzdwWNGdz7Vz50A/edit)
## Problem
Users can today use Feast to track the final view of batch features. An example may be:

### Problem 1: Difficult to iterate on transformation logic within Feast
This requires that users maintain transformation logic outside of Feast. For users to iterate on transformation logic + adapt to different use cases, they need to find the relevant pipeline logic outside of Feast.
### Problem 2: Difficult for data scientists to impact batch scoring pipelines
If a data scientist wants to help author the necessary features for a specific model, today the data scientist needs to:
- Setup transformation logic that outputs the view above
- Register the view as a data source in Feast (+ feature view)
- Register a feature service referencing the feature view
That first step can be problematic, because it implies data scientists are responsible for creating views of data without visibility into what other views already exist and how they are produced.
**Note: ** Feast does not intend to be a full solution for describing complex sequences of transformation DAGs in the near future. This is for simple transformations that can address the 80%.
## Solution
See the RFC above for more details. In short, there should be a way for:
- SQL centric transformations (offline store agnostic)
- Writing SQL transformations that execute in data warehouses to produce views.
```python
@batch_feature_view(
sources=[data_source]
name="project.dataset.view",
mode="snowflake_sql",
timestamp_field="feature_timestamp",
)
def my_feature_view(data_source):
return f"""
SELECT
transaction_count + 100,
user_id,
feature_timestamp
FROM {data_source}
"""
```
- Pythonic transformations (e.g. PySpark)
- The expectation in Spark at least would be that users bring a Spark context
```python
@batch_feature_view(
name="project.dataset.view",
mode="pyspark",
timestamp_field="feature_timestamp",
sources=[credit_scores]
)
def user_has_good_credit(credit_scores):
from pyspark.sql import functions as f
return credit_scores \
.withColumn('user_has_good_credit', f.when(f.col('credit_score') > 670, 1).otherwise(0)) \
.select('user_id', 'user_has_good_credit', 'timestamp')
```
- In both cases above, opportunity for wrappers that help data scientists author features (e.g. aggregations or common types of features)
| 1.0 | Batch transformations - See also: [Feast RFC-028: Batch Transformations](https://docs.google.com/document/d/1964OkzuBljifDvkV-0fakp2uaijnVzdwWNGdz7Vz50A/edit)
## Problem
Users can today use Feast to track the final view of batch features. An example may be:

### Problem 1: Difficult to iterate on transformation logic within Feast
This requires that users maintain transformation logic outside of Feast. For users to iterate on transformation logic + adapt to different use cases, they need to find the relevant pipeline logic outside of Feast.
### Problem 2: Difficult for data scientists to impact batch scoring pipelines
If a data scientist wants to help author the necessary features for a specific model, today the data scientist needs to:
- Setup transformation logic that outputs the view above
- Register the view as a data source in Feast (+ feature view)
- Register a feature service referencing the feature view
That first step can be problematic, because it implies data scientists are responsible for creating views of data without visibility into what other views already exist and how they are produced.
**Note: ** Feast does not intend to be a full solution for describing complex sequences of transformation DAGs in the near future. This is for simple transformations that can address the 80%.
## Solution
See the RFC above for more details. In short, there should be a way for:
- SQL centric transformations (offline store agnostic)
- Writing SQL transformations that execute in data warehouses to produce views.
```python
@batch_feature_view(
sources=[data_source]
name="project.dataset.view",
mode="snowflake_sql",
timestamp_field="feature_timestamp",
)
def my_feature_view(data_source):
return f"""
SELECT
transaction_count + 100,
user_id,
feature_timestamp
FROM {data_source}
"""
```
- Pythonic transformations (e.g. PySpark)
- The expectation in Spark at least would be that users bring a Spark context
```python
@batch_feature_view(
name="project.dataset.view",
mode="pyspark",
timestamp_field="feature_timestamp",
sources=[credit_scores]
)
def user_has_good_credit(credit_scores):
from pyspark.sql import functions as f
return credit_scores \
.withColumn('user_has_good_credit', f.when(f.col('credit_score') > 670, 1).otherwise(0)) \
.select('user_id', 'user_has_good_credit', 'timestamp')
```
- In both cases above, opportunity for wrappers that help data scientists author features (e.g. aggregations or common types of features)
| non_test | batch transformations see also problem users can today use feast to track the final view of batch features an example may be problem difficult to iterate on transformation logic within feast this requires that users maintain transformation logic outside of feast for users to iterate on transformation logic adapt to different use cases they need to find the relevant pipeline logic outside of feast problem difficult for data scientists to impact batch scoring pipelines if a data scientist wants to help author the necessary features for a specific model today the data scientist needs to setup transformation logic that outputs the view above register the view as a data source in feast feature view register a feature service referencing the feature view that first step can be problematic because it implies data scientists are responsible for creating views of data without visibility into what other views already exist and how they are produced note feast does not intend to be a full solution for describing complex sequences of transformation dags in the near future this is for simple transformations that can address the solution see the rfc above for more details in short there should be a way for sql centric transformations offline store agnostic writing sql transformations that execute in data warehouses to produce views python batch feature view sources name project dataset view mode snowflake sql timestamp field feature timestamp def my feature view data source return f select transaction count user id feature timestamp from data source pythonic transformations e g pyspark the expectation in spark at least would be that users bring a spark context python batch feature view name project dataset view mode pyspark timestamp field feature timestamp sources def user has good credit credit scores from pyspark sql import functions as f return credit scores withcolumn user has good credit when col credit score otherwise select user id user has good credit timestamp in both cases above opportunity for wrappers that help data scientists author features e g aggregations or common types of features | 0 |
242,840 | 20,267,029,973 | IssuesEvent | 2022-02-15 13:03:43 | boku-ilen/landscapelab | https://api.github.com/repos/boku-ilen/landscapelab | closed | dynamically load geodata at runtime | enhancement needs testing geodata UI | - include an open geodata dialog for selecting a file or connection (valid GDAL input)
- display this data as a new "layer" (here we probably have to add simple display/symbology options depending on the data type)
- maybe even simple grouping by attribute, aggregations, etc. | 1.0 | dynamically load geodata at runtime - - include an open geodata dialog for selecting a file or connection (valid GDAL input)
- display this data as a new "layer" (here we probably have to add simple display/symbology options depending on the data type)
- maybe even simple grouping by attribute, aggregations, etc. | test | dynamically load geodata at runtime include an open geodata dialog for selecting a file or connection valid gdal input display this data as a new layer here we probably have to add simple display symbology options depending on the data type maybe even simple grouping by attribute aggregations etc | 1 |
680,726 | 23,283,591,718 | IssuesEvent | 2022-08-05 14:21:56 | googleapis/nodejs-storage | https://api.github.com/repos/googleapis/nodejs-storage | closed | file upload not working. Issue with readable-stream. | type: bug priority: p2 api: storage | I'm trying to upload the file but it's not working. Some issues with dependency related to readable-stream.
<img width="1036" alt="Screenshot 2022-07-27 at 19 13 51" src="https://user-images.githubusercontent.com/93306798/181263255-374b269c-b5f6-48f8-b222-d7380b197655.png">
here is the code snippet
`bucket.upload(filepath,options,function (err, file) { // uploaded})`
My package.json
[package.txt](https://github.com/googleapis/nodejs-storage/files/9199509/package.txt)
My yarn-lock file
[yarn.txt](https://github.com/googleapis/nodejs-storage/files/9199516/yarn.txt)
| 1.0 | file upload not working. Issue with readable-stream. - I'm trying to upload the file but it's not working. Some issues with dependency related to readable-stream.
<img width="1036" alt="Screenshot 2022-07-27 at 19 13 51" src="https://user-images.githubusercontent.com/93306798/181263255-374b269c-b5f6-48f8-b222-d7380b197655.png">
here is the code snippet
`bucket.upload(filepath,options,function (err, file) { // uploaded})`
My package.json
[package.txt](https://github.com/googleapis/nodejs-storage/files/9199509/package.txt)
My yarn-lock file
[yarn.txt](https://github.com/googleapis/nodejs-storage/files/9199516/yarn.txt)
| non_test | file upload not working issue with readable stream i m trying to upload the file but it s not working some issues with dependency related to readable stream img width alt screenshot at src here is the code snippet bucket upload filepath options function err file uploaded my package json my yarn lock file | 0 |
12,380 | 3,071,937,340 | IssuesEvent | 2015-08-19 14:41:33 | Tiendil/the-tale | https://api.github.com/repos/Tiendil/the-tale | closed | new hero abilities | comp_game_logic cont_game_designe est_medium type_task | - faster trait changes (for all traits, or one ability per trait)
- higher probability of special quests
- http://the-tale.org/forum/threads/2040
- "stealth": reduces the chance of a fight with a monster while travelling
- "shout": the opponent cannot use abilities for N turns.
- after use, the hero loses health every turn, but the enemy loses more; while the ability is active nothing else happens, and it can be interrupted on any turn.
- Sith choke | 1.0 | new hero abilities - - faster trait changes (for all traits, or one ability per trait)
- higher probability of special quests
- http://the-tale.org/forum/threads/2040
- "stealth": reduces the chance of a fight with a monster while travelling
- "shout": the opponent cannot use abilities for N turns.
- after use, the hero loses health every turn, but the enemy loses more; while the ability is active nothing else happens, and it can be interrupted on any turn.
- Sith choke | non_test | new hero abilities faster trait changes for all traits or one ability per trait higher probability of special quests stealth reduces the chance of a fight with a monster while travelling shout the opponent cannot use abilities for n turns after use the hero loses health every turn but the enemy loses more while the ability is active nothing else happens and it can be interrupted on any turn sith choke | 0 |
363,001 | 10,736,048,282 | IssuesEvent | 2019-10-29 10:03:49 | wso2-cellery/sdk | https://api.github.com/repos/wso2-cellery/sdk | closed | Move composite sts to cellery-system | Priority/High Severity/Major Type/Improvement | Currently with composites, the sts for composites starts in the default namespace. It would be better to move this to cellery-system given that the composite-sts is not something the user should be aware of. | 1.0 | Move composite sts to cellery-system - Currently with composites, the sts for composites starts in the default namespace. It would be better to move this to cellery-system given that the composite-sts is not something the user should be aware of. | non_test | move composite sts to cellery system currently with composites the sts for composites starts in the default namespace it would be better to move this to cellery system given that the composite sts is not something the user should be aware of | 0 |
657,981 | 21,873,935,561 | IssuesEvent | 2022-05-19 08:27:40 | apache/incubator-kyuubi | https://api.github.com/repos/apache/incubator-kyuubi | closed | [Bug] Fix lock bug if engine initialization timeout | kind:bug priority:major | ### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issues](https://github.com/apache/incubator-kyuubi/issues?q=is%3Aissue) and found no similar issues.
### Describe the bug
We should throw an exception if we time out while acquiring the lock.
Let's say we have three clients requesting the same lock against two kyuubi server instances.
client A ---> kyuubi X -- first acquired \
client B ---> kyuubi X -- second acquired -- zookeeper
client C ---> kyuubi Y -- third acquired /
The first client A acquires the lock, then B and C are blocked until A releases the lock,
with the outcome depending on the state of the engine A created:
- SUCCESS
B acquires the lock, gets the engine ref and releases the lock.
C acquires the lock, gets the engine ref and releases the lock.
- FAILED or TIMEOUT
B acquires the lock and tries to create the engine again.
C should time out and throw an exception back to the client. This fast fail
avoids the client waiting too long under concurrency.
Also, each engine type should use its own lock.
### Affects Version(s)
master/1.5
### Kyuubi Server Log Output
_No response_
### Kyuubi Engine Log Output
_No response_
### Kyuubi Server Configurations
_No response_
### Kyuubi Engine Configurations
_No response_
### Additional context
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR! | 1.0 | [Bug] Fix lock bug if engine initialization timeout - ### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issues](https://github.com/apache/incubator-kyuubi/issues?q=is%3Aissue) and found no similar issues.
### Describe the bug
We should throw an exception if we time out while acquiring the lock.
Let's say we have three clients requesting the same lock against two kyuubi server instances.
client A ---> kyuubi X -- first acquired \
client B ---> kyuubi X -- second acquired -- zookeeper
client C ---> kyuubi Y -- third acquired /
The first client A acquires the lock, then B and C are blocked until A releases the lock,
with the outcome depending on the state of the engine A created:
- SUCCESS
B acquires the lock, gets the engine ref and releases the lock.
C acquires the lock, gets the engine ref and releases the lock.
- FAILED or TIMEOUT
B acquires the lock and tries to create the engine again.
C should time out and throw an exception back to the client. This fast fail
avoids the client waiting too long under concurrency.
Also, each engine type should use its own lock.
### Affects Version(s)
master/1.5
### Kyuubi Server Log Output
_No response_
### Kyuubi Engine Log Output
_No response_
### Kyuubi Server Configurations
_No response_
### Kyuubi Engine Configurations
_No response_
### Additional context
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR! | non_test | fix lock bug if engine initialization timeout code of conduct i agree to follow this project s search before asking i have searched in the and found no similar issues describe the bug we should throw exception if timeout during acquiring lock let s say we have three clients with same request lock to two kyuubi server instances client a kyuubi x first acquired client b kyuubi x second acquired zookeeper client c kyuubi y third acquired the first client a acquired the lock then b and c are blocked until a release the lock with the a created engine state success b acquired the lock then get engine ref and release the lock c acquired the lock then get engine ref and release the lock failed or timeout b acquired the lock then try to create engine again c should be timeout and throw exception back to client this fast fail to avoid client too long to waiting in concurrent also different engine type should use its own lock affects version s master kyuubi server log output no response kyuubi engine log output no response kyuubi server configurations no response kyuubi engine configurations no response additional context no response are you willing to submit pr yes i am willing to submit a pr | 0 |
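The fast-fail semantics described in the report can be sketched with plain Python threading (Kyuubi's real lock is ZooKeeper-based and written in Scala; everything below is illustrative): block up to a timeout, then raise instead of waiting indefinitely, with one lock per engine type.
```python
import threading

class EngineLockTimeout(Exception):
    pass

def with_engine_lock(lock: threading.Lock, timeout_s: float, action):
    # Client C's case: give up after timeout_s instead of queueing forever.
    if not lock.acquire(timeout=timeout_s):
        raise EngineLockTimeout(f"could not acquire lock within {timeout_s}s")
    try:
        return action()  # e.g. get or create the engine
    finally:
        lock.release()

# One lock per engine type, as the report suggests.
engine_locks = {"SPARK_SQL": threading.Lock(), "FLINK_SQL": threading.Lock()}
```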
61,290 | 14,965,281,723 | IssuesEvent | 2021-01-27 13:10:55 | eventespresso/barista | https://api.github.com/repos/eventespresso/barista | closed | Prevent Duplicate SASS Imports | C: build-process 🔨 D: Packages 📦 P2: HIGH priority 😮 S:1 new 👶🏻 T: bug 🐞 | Seems to be a fairly common issue for ppl using complex SASS setups. Here's a thread regarding the issue with a possible fix:
https://github.com/webpack-contrib/sass-loader/issues/145
# OOPS
that link above is for webpack 🤦🏻♂️ | 1.0 | Prevent Duplicate SASS Imports - Seems to be a fairly common issue for ppl using complex SASS setups. Here's a thread regarding the issue with a possible fix:
https://github.com/webpack-contrib/sass-loader/issues/145
# OOPS
that link above is for webpack 🤦🏻♂️ | non_test | prevent duplicate sass imports seems to be a fairly common issue for ppl using complex sass setups here s a thread regarding the issue with a possible fix oops that link above is for webpack 🤦🏻♂️ | 0 |
54,569 | 13,912,446,426 | IssuesEvent | 2020-10-20 18:53:05 | jgeraigery/LocalCatalogManager | https://api.github.com/repos/jgeraigery/LocalCatalogManager | closed | CVE-2017-7525 (High) detected in jackson-databind-2.8.5.jar - autoclosed | security vulnerability | ## CVE-2017-7525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: LocalCatalogManager/lcm-server/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/LocalCatalogManager/commit/b8c24e199f2d440dea3ce3cc2c66ada102d5d922">b8c24e199f2d440dea3ce3cc2c66ada102d5d922</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.
<p>Publish Date: 2018-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525>CVE-2017-7525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525</a></p>
<p>Release Date: 2018-02-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9"}],"vulnerabilityIdentifier":"CVE-2017-7525","vulnerabilityDetails":"A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2017-7525 (High) detected in jackson-databind-2.8.5.jar - autoclosed - ## CVE-2017-7525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: LocalCatalogManager/lcm-server/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/LocalCatalogManager/commit/b8c24e199f2d440dea3ce3cc2c66ada102d5d922">b8c24e199f2d440dea3ce3cc2c66ada102d5d922</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.
<p>Publish Date: 2018-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525>CVE-2017-7525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525</a></p>
<p>Release Date: 2018-02-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9"}],"vulnerabilityIdentifier":"CVE-2017-7525","vulnerabilityDetails":"A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file localcatalogmanager lcm server pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a deserialization flaw was discovered in the jackson databind versions before and which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readvalue method of the objectmapper publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a deserialization flaw was discovered in the jackson databind versions before and which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readvalue method of the objectmapper vulnerabilityurl | 0 |
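To make the flaw described in this advisory concrete: CVE-2017-7525 hinges on polymorphic deserialization, where `readValue` is allowed to honor a class name carried inside the JSON itself. A minimal sketch of the risky pattern follows; the gadget class name is a placeholder, not an actual exploit chain, and the remediation is the version upgrade listed in the suggested fix above.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class DefaultTypingSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Default typing lets the JSON itself pick the target class, which is
        // what a crafted payload abuses to instantiate a "gadget" type.
        mapper.enableDefaultTyping();

        // Placeholder payload shape -- a real attack names a class whose
        // constructor or setters have dangerous side effects.
        String untrusted = "[\"com.example.Gadget\", {}]";
        Object value = mapper.readValue(untrusted, Object.class);
        System.out.println(value);
    }
}
```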
20,523 | 3,815,766,408 | IssuesEvent | 2016-03-28 19:04:45 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | K8S - kubectl CLI unable to connect | area/kubernetes kind/bug priority/0 status/resolved status/to-test | Rancher Version: 0.63
Docker Version: 1.9.1
OS: Rancher - 4.2.3 Server - OSX local
Steps to Reproduce:
1. Create K8s cluster
2. Copy config from kubectl window to local machine.
3. Attempt to run any kubectl command, e.g. `kubectl version`
Results: `error: couldn't read version from server: an error on the server has prevented the request from succeeding`
Expected: kubectl to show client and server version.
I am using GitHub auth; is this preventing the client access?
| 1.0 | K8S - kubectl CLI unable to connect - Rancher Version: 0.63
Docker Version: 1.9.1
OS: Rancher - 4.2.3 Server - OSX local
Steps to Reproduce:
1. Create K8s cluster
2. Copy config from kubectl window to local machine.
3. Attempt to run any kubectl command, e.g. `kubectl version`
Results: `error: couldn't read version from server: an error on the server has prevented the request from succeeding`
Expected: kubectl to show client and server version.
I am using GitHub auth; is this preventing the client access?
| test | kubectl cli unable to connect rancher version docker version os rancher server osx local steps to reproduce create cluster copy config from kubectl window to local machine attempt to run any kubectl command e g kubectl version results error couldn t read version from server an error on the server has prevented the request from succeeding expected kubectl to show client and server version i am using github auth is this preventing the client access | 1 |
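When triaging this kind of generic "an error on the server has prevented the request from succeeding" failure, two checks that usually narrow it down are sketched below (the host, token, and file paths are placeholders):

```sh
# Use the copied config explicitly and raise kubectl's client-side log level
# so the failing URL and HTTP status code become visible.
kubectl --kubeconfig=./config version --v=8

# Query the API server's /version endpoint directly to separate auth
# problems (401/403) from TLS or routing problems.
curl -k -H "Authorization: Bearer <token>" https://<api-server>/version
```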
21,697 | 3,916,270,993 | IssuesEvent | 2016-04-21 00:31:31 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Master - Can't add machine driver | area/machine area/ui kind/bug status/resolved status/to-test | Version - master 4/15
Steps to Reproduce:
1. Go to settings
2. Click on Add Machine Driver
3. Put vultr as name and http://vtf.me/docker-machine-driver-vultr.zip as download URL
Results: Not Found
Expected: Should find it | 1.0 | Master - Can't add machine driver - Version - master 4/15
Steps to Reproduce:
1. Go to settings
2. Click on Add Machine Driver
3. Put vultr as name and http://vtf.me/docker-machine-driver-vultr.zip as download URL
Results: Not Found
Expected: Should find it | test | master can t add machine driver version master steps to reproduce go to settings click on add machine driver put vultr as name and as download url results not found expected should find it | 1 |
75,462 | 7,472,763,203 | IssuesEvent | 2018-04-03 13:37:54 | prasadtalasila/IRCLogParser | https://api.github.com/repos/prasadtalasila/IRCLogParser | closed | functional tests for networks of network profile | functional test | Create functional tests to verify the network profile. The suggested lines of code to be considered are:
Network profile
* presence networks
* lines 83-85 [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L83-L85)
* message exchange network
* lines 27,28,83,89 of [slack.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/slack.py#L83)
* lines 24,25,80,86 of [scummvm.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/scummvm.py#L80)
* lines 16,17,20,22 of [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L20-L22)
* reduced networks
* lines 27,28,111,112 of [slack.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/slack.py#L111-L112)
* lines 24,25,108,109 of [scummvm.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/scummvm.py#L108-L109)
* lines 127-140 of [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L127-L140)
| 1.0 | functional tests for networks of network profile - Create functional tests to verify the network profile. The suggested lines of code to be considered are:
Network profile
* presence networks
* lines 83-85 [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L83-L85)
* message exchange network
* lines 27,28,83,89 of [slack.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/slack.py#L83)
* lines 24,25,80,86 of [scummvm.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/scummvm.py#L80)
* lines 16,17,20,22 of [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L20-L22)
* reduced networks
* lines 27,28,111,112 of [slack.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/slack.py#L111-L112)
* lines 24,25,108,109 of [scummvm.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/scummvm.py#L108-L109)
* lines 127-140 of [ubuntu.py](https://github.com/prasadtalasila/IRCLogParser/blob/master/ubuntu.py#L127-L140)
| test | functional tests for networks of network profile create functional tests to verify the network profile the suggested lines of code to be considered are network profile presence networks lines message exchange network lines of lines of lines of reduced networks lines of lines of lines of | 1 |
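A functional test for these network builders could take roughly the shape below. The builder here is a stand-in written against networkx, not IRCLogParser's actual API; the real functions at the lines cited above have their own signatures.

```python
import networkx as nx

def build_presence_network(pairs):
    """Stand-in for the presence-network logic referenced above; the real
    project code parses IRC logs rather than pre-paired nicks."""
    graph = nx.Graph()
    for speaker, addressee in pairs:
        graph.add_edge(speaker, addressee)
    return graph

def test_presence_network_edges():
    graph = build_presence_network([("alice", "bob"), ("bob", "carol")])
    assert graph.number_of_nodes() == 3
    assert graph.has_edge("alice", "bob")
    assert graph.has_edge("bob", "carol")
```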
67,849 | 9,102,774,472 | IssuesEvent | 2019-02-20 14:31:49 | nil0x42/phpsploit | https://api.github.com/repos/nil0x42/phpsploit | reopened | Missing doc on pros & cons of PASSKEY & backdooring strategy | documentation | I don't understand the point of changing the password of the backdoor if one can still read it in the line of code that I insert inside a PHP file of a website. Weevely doesn't show the raw password in the code of the backdoor.
Also, no disrespect to your work, but I would like to know: what is the difference between phpsploit and weevely? Better evasion? | 1.0 | Missing doc on pros & cons of PASSKEY & backdooring strategy - I don't understand the point of changing the password of the backdoor if one can still read it in the line of code that I insert inside a PHP file of a website. Weevely doesn't show the raw password in the code of the backdoor.
Also, no disrespect to your work, but I would like to know: what is the difference between phpsploit and weevely? Better evasion? | non_test | missing doc on pros cons of passkey backdooring strategy i don t understand what is the point of changing the password of the backdoor if one can still read it on the line of code that i insert inside a php file of a website weevely doesn t show the raw password in the code of the backdoor also without disrespect for your work i would like to know what is the difference between phpsploit and weevely better evasion | 0 |
326,284 | 27,981,658,854 | IssuesEvent | 2023-03-26 08:07:25 | uutils/coreutils | https://api.github.com/repos/uutils/coreutils | closed | `test '(' foo --1 ` should not panic | U - test good first issue | Working on a fuzzer for test:
```
$ ./target/debug/coreutils test '(' foo --1
thread 'main' panicked at 'expected ‘)’', src/uu/test/src/parser.rs:145:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
GNU returns:
```
$ LANG=C /usr/bin/test '(' foo --1
/usr/bin/test: 'foo': binary operator expected
```
See also:
https://github.com/uutils/coreutils/issues/4555 | 1.0 | `test '(' foo --1 ` should not panic - Working on a fuzzer for test:
```
$ ./target/debug/coreutils test '(' foo --1
thread 'main' panicked at 'expected ‘)’', src/uu/test/src/parser.rs:145:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
GNU returns:
```
$ LANG=C /usr/bin/test '(' foo --1
/usr/bin/test: 'foo': binary operator expected
```
See also:
https://github.com/uutils/coreutils/issues/4555 | test | test foo should not panic working on a fuzzer for test target debug coreutils test foo thread main panicked at expected ‘ ’ src uu test src parser rs note run with rust backtrace environment variable to display a backtrace gnu returns lang c usr bin test foo usr bin test foo binary operator expected see also | 1 |
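The usual fix for this class of panic is to have the parser's "expected token" path return an error that the caller converts into a GNU-style diagnostic and exit code, instead of unwrapping. A generic sketch of that shape (not the actual uutils parser code):

```rust
#[derive(Debug)]
struct ParseError(String);

fn expect_token<'a>(
    tokens: &mut impl Iterator<Item = &'a str>,
    wanted: &str,
) -> Result<(), ParseError> {
    match tokens.next() {
        Some(tok) if tok == wanted => Ok(()),
        // Propagate a recoverable error instead of panicking mid-parse.
        _ => Err(ParseError(format!("expected '{wanted}'"))),
    }
}

fn main() {
    let args = ["(", "foo", "--1"];
    let mut it = args.iter().copied();
    it.next(); // "("
    it.next(); // "foo"
    if let Err(ParseError(msg)) = expect_token(&mut it, ")") {
        eprintln!("test: {msg}");
        std::process::exit(2); // GNU test exits 2 on usage/parse errors
    }
}
```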
543,358 | 15,880,519,604 | IssuesEvent | 2021-04-09 13:47:45 | AY2021S2-CS2103-T14-1/tp | https://api.github.com/repos/AY2021S2-CS2103-T14-1/tp | closed | Bug(Command History): Update command history to use the updated index parsing rules | priority.Medium type.Bug | Command history should be updated to use the index/count parsing rules that the team agreed upon. | 1.0 | Bug(Command History): Update command history to use the updated index parsing rules - Command history should be updated to use the index/count parsing rules that the team agreed upon. | non_test | bug command history update command history to use the updated index parsing rules command history should be updated to use the index count parsing rules that the team agreed upon | 0 |
251,616 | 21,513,796,714 | IssuesEvent | 2022-04-28 08:02:01 | Tribler/tribler | https://api.github.com/repos/Tribler/tribler | closed | [Tests] RuntimeError: There is no current event loop in thread 'MainThread' | type: bug component: tests | https://github.com/Tribler/tribler/runs/5988589743?check_suite_focus=true

```python
____________________ test_start_tribler_core_no_exceptions _____________________

mocked_core_session = <AsyncMock name='core_session' id='4803055920'>

    @patch('tribler.core.logger.logger.load_logger_config', new=MagicMock())
    @patch('tribler.core.start_core.set_process_priority', new=MagicMock())
    @patch('tribler.core.start_core.check_and_enable_code_tracing', new=MagicMock())
    @patch('asyncio.get_event_loop', new=MagicMock())
    @patch('tribler.core.start_core.TriblerConfig.load', new=MagicMock())
    @patch('tribler.core.start_core.core_session')
    def test_start_tribler_core_no_exceptions(mocked_core_session):
        # test that base logic of tribler core runs without exceptions
>       run_tribler_core_session(1, 'key', Path('.'), False)

src/tribler/core/tests/test_start_core.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/tribler/core/start_core.py:155: in run_tribler_core_session
    loop.run_until_complete(core_session(config, components=list(components_gen(config))))
src/tribler/core/start_core.py:51: in components_gen
    yield ReporterComponent()
src/tribler/core/components/base.py:204: in __init__
    self.started_event = Event()
../../../hostedtoolcache/Python/3.8.12/x64/lib/python3.8/asyncio/locks.py:[260](https://github.com/Tribler/tribler/runs/5988589743?check_suite_focus=true#step:7:260): in __init__
    self._loop = events.get_event_loop()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <asyncio.unix_events._UnixDefaultEventLoopPolicy object at 0x1203b5a30>

    def get_event_loop(self):
        """Get the event loop for the current context.

        Returns an instance of EventLoop or raises an exception.
        """
        if (self._local._loop is None and
                not self._local._set_called and
                isinstance(threading.current_thread(), threading._MainThread)):
            self.set_event_loop(self.new_event_loop())

        if self._local._loop is None:
>           raise RuntimeError('There is no current event loop in thread %r.'
                               % threading.current_thread().name)
E           RuntimeError: There is no current event loop in thread 'MainThread'.

../../../hostedtoolcache/Python/3.8.12/x64/lib/python3.8/asyncio/events.py:639: RuntimeError
```

Probably related to #6176 | 1.0 | [Tests] RuntimeError: There is no current event loop in thread 'MainThread' - https://github.com/Tribler/tribler/runs/5988589743?check_suite_focus=true
```python
____________________ test_start_tribler_core_no_exceptions _____________________
mocked_core_session = <AsyncMock name='core_session' id='4803055920'>
@patch('tribler.core.logger.logger.load_logger_config', new=MagicMock())
@patch('tribler.core.start_core.set_process_priority', new=MagicMock())
@patch('tribler.core.start_core.check_and_enable_code_tracing', new=MagicMock())
@patch('asyncio.get_event_loop', new=MagicMock())
@patch('tribler.core.start_core.TriblerConfig.load', new=MagicMock())
@patch('tribler.core.start_core.core_session')
def test_start_tribler_core_no_exceptions(mocked_core_session):
# test that base logic of tribler core runs without exceptions
> run_tribler_core_session(1, 'key', Path('.'), False)
src/tribler/core/tests/test_start_core.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/tribler/core/start_core.py:155: in run_tribler_core_session
loop.run_until_complete(core_session(config, components=list(components_gen(config))))
src/tribler/core/start_core.py:51: in components_gen
yield ReporterComponent()
src/tribler/core/components/base.py:204: in __init__
self.started_event = Event()
../../../hostedtoolcache/Python/3.8.12/x64/lib/python3.8/asyncio/locks.py:[260](https://github.com/Tribler/tribler/runs/5988589743?check_suite_focus=true#step:7:260): in __init__
self._loop = events.get_event_loop()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <asyncio.unix_events._UnixDefaultEventLoopPolicy object at 0x1203b5a30>
def get_event_loop(self):
"""Get the event loop for the current context.
Returns an instance of EventLoop or raises an exception.
"""
if (self._local._loop is None and
not self._local._set_called and
isinstance(threading.current_thread(), threading._MainThread)):
self.set_event_loop(self.new_event_loop())
if self._local._loop is None:
> raise RuntimeError('There is no current event loop in thread %r.'
% threading.current_thread().name)
E RuntimeError: There is no current event loop in thread 'MainThread'.
../../../hostedtoolcache/Python/3.8.12/x64/lib/python3.8/asyncio/events.py:639: RuntimeError
```
Probably related to #6176 | test | runtimeerror there is no current event loop in thread mainthread python test start tribler core no exceptions mocked core session patch tribler core logger logger load logger config new magicmock patch tribler core start core set process priority new magicmock patch tribler core start core check and enable code tracing new magicmock patch asyncio get event loop new magicmock patch tribler core start core triblerconfig load new magicmock patch tribler core start core core session def test start tribler core no exceptions mocked core session test that base logic of tribler core runs without exceptions run tribler core session key path false src tribler core tests test start core py src tribler core start core py in run tribler core session loop run until complete core session config components list components gen config src tribler core start core py in components gen yield reportercomponent src tribler core components base py in init self started event event hostedtoolcache python lib asyncio locks py in init self loop events get event loop self def get event loop self get the event loop for the current context returns an instance of eventloop or raises an exception if self local loop is none and not self local set called and isinstance threading current thread threading mainthread self set event loop self new event loop if self local loop is none raise runtimeerror there is no current event loop in thread r threading current thread name e runtimeerror there is no current event loop in thread mainthread hostedtoolcache python lib asyncio events py runtimeerror probably related to | 1 |
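Outside of Tribler's code, this failure mode reduces to constructing an asyncio primitive on a thread whose loop lookup fails: on Python 3.8 (the interpreter in this CI run), `asyncio.Event()` resolves the current event loop at construction time, as the traceback's `locks.py` frame shows. A minimal reproduction and the conventional fix:

```python
import asyncio
import threading

def worker(set_loop: bool):
    if set_loop:
        # The conventional fix: give the thread a loop before any asyncio
        # primitive is constructed.
        asyncio.set_event_loop(asyncio.new_event_loop())
    # On Python 3.8, Event() calls get_event_loop() in __init__; without a
    # loop set, this raises "There is no current event loop in thread ...".
    event = asyncio.Event()
    print("created", event)

t = threading.Thread(target=worker, args=(True,))
t.start()
t.join()
```

In the test above the loop access is mocked out, so nothing creates a real loop before `Event()` performs its own lookup (and the policy's implicit-creation path is unavailable once `set_event_loop` has been called on the thread, as pytest harnesses commonly do).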
168,039 | 13,056,618,216 | IssuesEvent | 2020-07-30 05:17:51 | amanzi/amanzi | https://api.github.com/repos/amanzi/amanzi | closed | Add MPC test for coupled transport with fracture BC | testing | Add MPC test for coupled transport with fracture BC | 1.0 | Add MPC test for coupled transport with fracture BC - Add MPC test for coupled transport with fracture BC | test | add mpc test for coupled transport with fracture bc add mpc test for coupled transport with fracture bc | 1 |
94,141 | 15,962,341,732 | IssuesEvent | 2021-04-16 01:05:48 | RG4421/nucleus | https://api.github.com/repos/RG4421/nucleus | opened | CVE-2019-6283 (Medium) detected in node-sassv4.13.1, node-sass-4.14.1.tgz | security vulnerability | ## CVE-2019-6283 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sassv4.13.1</b>, <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: nucleus/packages/@nucleus/package.json</p>
<p>Path to vulnerable library: nucleus/packages/@nucleus/node_modules/node-sass/package.json,nucleus/packages/table/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- ember-table-2.2.3.tgz (Root Library)
- ember-cli-sass-7.2.0.tgz
- broccoli-sass-source-maps-2.2.0.tgz
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283>CVE-2019-6283</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.14.1","packageFilePaths":["/packages/@nucleus/package.json","/packages/table/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-table:2.2.3;ember-cli-sass:7.2.0;broccoli-sass-source-maps:2.2.0;node-sass:4.14.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-6283","vulnerabilityDetails":"In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-6283 (Medium) detected in node-sassv4.13.1, node-sass-4.14.1.tgz - ## CVE-2019-6283 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sassv4.13.1</b>, <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: nucleus/packages/@nucleus/package.json</p>
<p>Path to vulnerable library: nucleus/packages/@nucleus/node_modules/node-sass/package.json,nucleus/packages/table/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- ember-table-2.2.3.tgz (Root Library)
- ember-cli-sass-7.2.0.tgz
- broccoli-sass-source-maps-2.2.0.tgz
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283>CVE-2019-6283</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.14.1","packageFilePaths":["/packages/@nucleus/package.json","/packages/table/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-table:2.2.3;ember-cli-sass:7.2.0;broccoli-sass-source-maps:2.2.0;node-sass:4.14.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-6283","vulnerabilityDetails":"In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in node node sass tgz cve medium severity vulnerability vulnerable libraries node node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file nucleus packages nucleus package json path to vulnerable library nucleus packages nucleus node modules node sass package json nucleus packages table node modules node sass package json dependency hierarchy ember table tgz root library ember cli sass tgz broccoli sass source maps tgz x node sass tgz vulnerable library found in base branch master vulnerability details in libsass a heap based buffer over read exists in sass prelexer parenthese scope in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree ember table ember cli sass broccoli sass source maps node sass isminimumfixversionavailable true minimumfixversion libsass basebranches vulnerabilityidentifier cve vulnerabilitydetails in libsass a heap based buffer over read exists in sass prelexer parenthese scope in prelexer hpp vulnerabilityurl | 0 |
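Since node-sass arrives here transitively through ember-cli-sass, one common remediation shape is a Yarn `resolutions` pin in the consuming package.json. The version below is a placeholder; it only helps if the pinned node-sass release actually bundles LibSass 3.6.0 or later, which should be verified against the node-sass release notes.

```json
{
  "resolutions": {
    "node-sass": "^5.0.0"
  }
}
```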
451,466 | 32,029,583,224 | IssuesEvent | 2023-09-22 11:19:43 | OpenDataManchester/PPP | https://api.github.com/repos/OpenDataManchester/PPP | closed | Data formats page | documentation | @Julianlstar @northernjamie I think the [data formats page](http://standard.open3p.org/2.0/2_Data_Formats/2_1_Data_Formats/) needs a complete overhaul.
_Originally posted by @DsposalTom in https://github.com/OpenDataManchester/PPP/issues/62#issuecomment-1623241543_
| 1.0 | Data formats page - @Julianlstar @northernjamie I think the [data formats page](http://standard.open3p.org/2.0/2_Data_Formats/2_1_Data_Formats/) needs a complete overhaul.
_Originally posted by @DsposalTom in https://github.com/OpenDataManchester/PPP/issues/62#issuecomment-1623241543_
| non_test | data formats page julianlstar northernjamie i think the needs a complete overhaul originally posted by dsposaltom in | 0 |
277,986 | 30,702,071,575 | IssuesEvent | 2023-07-27 01:00:05 | billmcchesney1/strelka | https://api.github.com/repos/billmcchesney1/strelka | reopened | CVE-2022-41721 (High) detected in github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916 | Mend: dependency security vulnerability | ## CVE-2022-41721 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/target/strelka/src/go/api/health (Root Library)
- google.golang.org/grpc-v1.35.0-dev.0.20201218190559-666aea1fb34c
- github.com/grpc/grpc-go/internal-v1.36.0-dev
- :x: **github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A request smuggling attack is possible when using MaxBytesHandler. When using MaxBytesHandler, the body of an HTTP request is not fully consumed. When the server attempts to read HTTP2 frames from the connection, it will instead be reading the body of the HTTP request, which could be attacker-manipulated to represent arbitrary HTTP2 requests.
<p>Publish Date: 2023-01-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41721>CVE-2022-41721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-13</p>
<p>Fix Resolution: v0.2.0</p>
</p>
</details>
<p></p>
| True | CVE-2022-41721 (High) detected in github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916 - ## CVE-2022-41721 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/target/strelka/src/go/api/health (Root Library)
- google.golang.org/grpc-v1.35.0-dev.0.20201218190559-666aea1fb34c
- github.com/grpc/grpc-go/internal-v1.36.0-dev
- :x: **github.com/golang/net/http2-986b41b23924a168277bf3df55a4fd462154f916** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A request smuggling attack is possible when using MaxBytesHandler. When using MaxBytesHandler, the body of an HTTP request is not fully consumed. When the server attempts to read HTTP2 frames from the connection, it will instead be reading the body of the HTTP request, which could be attacker-manipulated to represent arbitrary HTTP2 requests.
<p>Publish Date: 2023-01-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41721>CVE-2022-41721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-13</p>
<p>Fix Resolution: v0.2.0</p>
</p>
</details>
<p></p>
| non_test | cve high detected in github com golang net cve high severity vulnerability vulnerable library github com golang net go supplementary network libraries dependency hierarchy github com target strelka src go api health root library google golang org grpc dev github com grpc grpc go internal dev x github com golang net vulnerable library found in base branch master vulnerability details a request smuggling attack is possible when using maxbyteshandler when using maxbyteshandler the body of an http request is not fully consumed when the server attempts to read frames from the connection it will instead be reading the body of the http request which could be attacker manipulated to represent arbitrary requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution | 0 |
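The precondition in this advisory, `http.MaxBytesHandler` wrapping an HTTP/2-over-cleartext (h2c) handler, looks roughly like the sketch below. Whether strelka's server is actually configured this way is not established by the report, and the fix is simply moving golang.org/x/net to v0.2.0 or later.

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// The vulnerable combination per CVE-2022-41721: MaxBytesHandler leaves
	// part of the request body unconsumed, and the h2c server then reads
	// HTTP/2 frames out of that attacker-controlled remainder.
	h := http.MaxBytesHandler(h2c.NewHandler(mux, &http2.Server{}), 1<<20)

	log.Fatal(http.ListenAndServe(":8080", h))
}
```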
119,810 | 10,073,024,593 | IssuesEvent | 2019-07-24 08:42:02 | haxeui/hxWidgets | https://api.github.com/repos/haxeui/hxWidgets | closed | Sizer.hx: Int should be wx.widgets.SizerFlag | retest required | Haxe version: 4.0.0-rc2
OS: Linux Mint 19.1
I get the following error when I try to compile a project with hxWidgets:
hxWidgets/samples/00-Showcase/.haxelib/hxWidgets/1,0,0/src/wx/widgets/Sizer.hx:17: characters 135-136 : Int should be wx.widgets.SizerFlag
hxWidgets/samples/00-Showcase/.haxelib/hxWidgets/1,0,0/src/wx/widgets/Sizer.hx:15: characters 121-122 : Int should be wx.widgets.SizerFlag
If I change the lines in Sizer.hx to use SizerFlag.NONE instead of the integer value 0, the compiler error goes away.
```haxe
package wx.widgets;
import cpp.RawPointer;
@:include("wx/sizer.h")
@:unreflective
@:native("wxSizer")
@:structAccess
extern class Sizer extends Object {
//////////////////////////////////////////////////////////////////////////////////////////////////////////
// Instance functions
//////////////////////////////////////////////////////////////////////////////////////////////////////////
@:native("Add") @:overload(function(sizer:RawPointer<Sizer>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem> {})
- @:native("Add") public function add(window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem>;
+ @:native("Add") public function add(window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = SizerFlag.NONE, border:Int = 0):RawPointer<SizerItem>;
@:native("Insert") @:overload(function(index:Int, sizer:RawPointer<Sizer>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem> {})
- @:native("Insert") public function insert(index:Int, window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem>;
+ @:native("Insert") public function insert(index:Int, window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = SizerFlag.NONE, border:Int = 0):RawPointer<SizerItem>;
@:native("AddSpacer") public function addSpacer(size:Int):RawPointer<SizerItem>;
@:native("Remove") public function remove(index:Int):Bool;
@:native("Layout") public function layout():Void;
}
``` | 1.0 | Sizer.hx: Int should be wx.widgets.SizerFlag - Haxe version: 4.0.0-rc2
OS: Linux Mint 19.1
I get the following error when I try to compile a project with hxWidgets:
hxWidgets/samples/00-Showcase/.haxelib/hxWidgets/1,0,0/src/wx/widgets/Sizer.hx:17: characters 135-136 : Int should be wx.widgets.SizerFlag
hxWidgets/samples/00-Showcase/.haxelib/hxWidgets/1,0,0/src/wx/widgets/Sizer.hx:15: characters 121-122 : Int should be wx.widgets.SizerFlag
If I change the lines in Sizer.hx to use SizerFlag.NONE instead of the integer value 0, the compiler error goes away.
```haxe
package wx.widgets;
import cpp.RawPointer;
@:include("wx/sizer.h")
@:unreflective
@:native("wxSizer")
@:structAccess
extern class Sizer extends Object {
//////////////////////////////////////////////////////////////////////////////////////////////////////////
// Instance functions
//////////////////////////////////////////////////////////////////////////////////////////////////////////
@:native("Add") @:overload(function(sizer:RawPointer<Sizer>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem> {})
- @:native("Add") public function add(window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem>;
+ @:native("Add") public function add(window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = SizerFlag.NONE, border:Int = 0):RawPointer<SizerItem>;
@:native("Insert") @:overload(function(index:Int, sizer:RawPointer<Sizer>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem> {})
- @:native("Insert") public function insert(index:Int, window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = 0, border:Int = 0):RawPointer<SizerItem>;
+ @:native("Insert") public function insert(index:Int, window:RawPointer<Window>, proportion:Int = 0, flag:SizerFlag = SizerFlag.NONE, border:Int = 0):RawPointer<SizerItem>;
@:native("AddSpacer") public function addSpacer(size:Int):RawPointer<SizerItem>;
@:native("Remove") public function remove(index:Int):Bool;
@:native("Layout") public function layout():Void;
}
``` | test | sizer hx int should be wx widgets sizerflag haxe version os linux mint i get the following error when i try to compile a project with hxwidgets hxwidgets samples showcase haxelib hxwidgets src wx widgets sizer hx characters int should be wx widgets sizerflag hxwidgets samples showcase haxelib hxwidgets src wx widgets sizer hx characters int should be wx widgets sizerflag if i change the lines in sizer hx to use sizerflag none instead of the integer value the compiler error goes away haxe package wx widgets import cpp rawpointer include wx sizer h unreflective native wxsizer structaccess extern class sizer extends object instance functions native add overload function sizer rawpointer proportion int flag sizerflag border int rawpointer native add public function add window rawpointer proportion int flag sizerflag border int rawpointer native add public function add window rawpointer proportion int flag sizerflag sizerflag none border int rawpointer native insert overload function index int sizer rawpointer proportion int flag sizerflag border int rawpointer native insert public function insert index int window rawpointer proportion int flag sizerflag border int rawpointer native insert public function insert index int window rawpointer proportion int flag sizerflag sizerflag none border int rawpointer native addspacer public function addspacer size int rawpointer native remove public function remove index int bool native layout public function layout void | 1 |
15,125 | 9,480,235,156 | IssuesEvent | 2019-04-20 16:04:25 | AOSC-Dev/aosc-os-abbs | https://api.github.com/repos/AOSC-Dev/aosc-os-abbs | opened | sysstat: security update to ^12.0.3 | security to-stable upgrade | <!-- Please remove items that do not apply. -->
**CVE IDs:** CVE-2018-19416, CVE-2018-19517
**Other security advisory IDs:** openSUSE-SU-2019:1176-1
**Descriptions:**
- CVE-2018-19416: Fixed out-of-bounds read during a memmove call inside
the remap_struct function (bsc#1117001).
- CVE-2018-19517: Fixed out-of-bounds read during a memset call inside the
remap_struct function (bsc#1117260).
https://github.com/sysstat/sysstat/blob/v12.0.3/CHANGES
**PoC(s):** `https://github.com/sysstat/sysstat/issues/196`, `https://github.com/sysstat/sysstat/issues/199`
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
- [ ] RISC-V 64-bit `riscv64`
| True | sysstat: security update to ^12.0.3 - <!-- Please remove items that do not apply. -->
**CVE IDs:** CVE-2018-19416, CVE-2018-19517
**Other security advisory IDs:** openSUSE-SU-2019:1176-1
**Descriptions:**
- CVE-2018-19416: Fixed out-of-bounds read during a memmove call inside
the remap_struct function (bsc#1117001).
- CVE-2018-19517: Fixed out-of-bounds read during a memset call inside the
remap_struct function (bsc#1117260).
https://github.com/sysstat/sysstat/blob/v12.0.3/CHANGES
**PoC(s):** `https://github.com/sysstat/sysstat/issues/196`, `https://github.com/sysstat/sysstat/issues/199`
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
- [ ] RISC-V 64-bit `riscv64`
| non_test | sysstat security update to cve ids cve cve other security advisory ids opensuse su descriptions cve fixed out of bounds read during a memmove call inside the remap struct function bsc cve fixed out of bounds read during a memset call inside the remap struct function bsc poc s architectural progress bit optional environment armel powerpc bit be powerpc bit be powerpc risc v bit | 0 |
25,304 | 12,541,173,912 | IssuesEvent | 2020-06-05 11:49:49 | jitsi/jitsi-meet | https://api.github.com/repos/jitsi/jitsi-meet | closed | Provide some way to display server load problems to end users | feature-request performance web | First off, kudos to all the people that made jitsi-meet possible; I use it in personal and professional use cases and it's great!
Sorry if this feature request is already somewhere on the tracker or the forums; I couldn't find it, so apologies for the noise if it is.
**Is your feature request related to a problem you are facing?**
Yes. On some jitsi instances where the specifications and size of the server are not public or where usage may vary, the quality of a conference with the same number of participants (and the same network conditions) may vary. Many issues and forum posts talk about the potential limitations of running a jitsi server on limited hardware and the impact it has on users when they are in conferences or when the number of participants reaches more than X people, which is technically fully understandable.
**Describe the solution you'd like**
It would be really nice to have a (maybe optional) way of displaying to the end users a warning about the state of the server load when it might have an impact on their performance. For example, I would imagine a low-end server that one would use for 1-to-1 calls or small meetings, which works fine; then, when 5 people are on it and it is too loaded, it would suggest in the browser that the users maybe move to a bigger server (which could be indicated by the admin of the instance).
In a lot of cases varying experience on the same server causes the users to think they are having problems with their client, their ISP, etc. This type of feature would help them identify the problem.
**Describe alternatives you've considered**
Not sure what this section is for. Maybe there could be a setting to automatically disable some features such as video, with a warning such as "server overload, disabling some features"?
| True | Provide some way to display server load problems to end users - First off, kudos to all the people that made jitsi-meet possible; I use it in personal and professional use cases and it's great!
Sorry if this feature request is already somewhere on the tracker or the forums; I couldn't find it, so apologies for the noise if it is.
**Is your feature request related to a problem you are facing?**
Yes. On some jitsi instances where the specifications and size of the server are not public or where usage may vary, the quality of a conference with the same number of participants (and the same network conditions) may vary. Many issues and forum posts talk about the potential limitations of running a jitsi server on limited hardware and the impact it has on users when they are in conferences or when the number of participants reaches more than X people, which is technically fully understandable.
**Describe the solution you'd like**
It would be really nice to have a (maybe optional) way of displaying to the end users a warning about the state of the server load when it might have an impact on their performance. For example, I would imagine a low-end server that one would use for 1-to-1 calls or small meetings, which works fine; then, when 5 people are on it and it is too loaded, it would suggest in the browser that the users maybe move to a bigger server (which could be indicated by the admin of the instance).
In a lot of cases varying experience on the same server causes the users to think they are having problems with their client, their ISP, etc. This type of feature would help them identify the problem.
**Describe alternatives you've considered**
Not sure what this section is for. Maybe there could be a setting to automatically disable some features such as video, with a warning such as "server overload, disabling some features"?
| non_test | provide some way to display server load problems to end users first of kudos to all to people that made jitsi meet possible i use it in personal and professional use cases and it s great sorry if this feature request is already somewhere on the tracker of the forums couldn t find it sorry for the noise if it is is your feature request related to a problem you are facing yes on some jitsi instances where the specifications and size of server is not public or where usage may vary the quality of a conference with the same number of participants which same network conditions may vary many issues and forum posts talk about the potential limitations of running a jisti server on limited hardware and the impact it has on users when they are in conferences or when number of participants reach more than x people which is technically fully understandable describe the solution you d like it would be really nice to have maybe an optional way of displaying to the end users a warning about the state of the server load when it might have an impact on their performance for example i would imagine a low end server that one would use for to call or small meetings which works fine and then when people are on it and it is too loaded that it would suggest in the browser for the users to maybe move to a bigger server which could be indicated by the admin of the instance in a lot of cases varying experience on the same server causes the users to think they are having problems with their client their isp etc this type of feature would help them identify the problem describe alternatives you ve considered not sure what this section if for maybe there could be a setting to automatically disable some features such as video with a warning such as server overload disabling some features | 0 |
243,568 | 26,283,615,295 | IssuesEvent | 2023-01-07 15:35:12 | ForgeRock/ds-operator | https://api.github.com/repos/ForgeRock/ds-operator | closed | sigs.k8s.io/Controller-runtime-v0.9.2: 1 vulnerabilities (highest severity is: 7.5) - autoclosed | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sigs.k8s.io/Controller-runtime-v0.9.2</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/8f0573cd4136e25910b19e834292128da3e232d8">8f0573cd4136e25910b19e834292128da3e232d8</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (sigs.k8s.io/Controller-runtime-v0.9.2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-21698](https://www.mend.io/vulnerability-database/CVE-2022-21698) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/prometheus/Client_golang-v1.11.0 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-21698</summary>
### Vulnerable Library - <b>github.com/prometheus/Client_golang-v1.11.0</b></p>
<p>Prometheus instrumentation library for Go applications</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/prometheus/client_golang/@v/v1.11.0.zip">https://proxy.golang.org/github.com/prometheus/client_golang/@v/v1.11.0.zip</a></p>
<p>
Dependency Hierarchy:
- sigs.k8s.io/Controller-runtime-v0.9.2 (Root Library)
- :x: **github.com/prometheus/Client_golang-v1.11.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/8f0573cd4136e25910b19e834292128da3e232d8">8f0573cd4136e25910b19e834292128da3e232d8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
client_golang is the instrumentation library for Go applications in Prometheus, and the promhttp package in client_golang provides tooling around HTTP servers and clients. In client_golang prior to version 1.11.1, HTTP server is susceptible to a Denial of Service through unbounded cardinality, and potential memory exhaustion, when handling requests with non-standard HTTP methods. In order to be affected, an instrumented software must use any of `promhttp.InstrumentHandler*` middleware except `RequestsInFlight`; not filter any specific methods (e.g GET) before middleware; pass metric with `method` label name to our middleware; and not have any firewall/LB/proxy that filters away requests with unknown `method`. client_golang version 1.11.1 contains a patch for this issue. Several workarounds are available, including removing the `method` label name from counter/gauge used in the InstrumentHandler; turning off affected promhttp handlers; adding custom middleware before promhttp handler that will sanitize the request method given by Go http.Request; and using a reverse proxy or web application firewall, configured to only allow a limited set of methods.
<p>Publish Date: 2022-02-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21698>CVE-2022-21698</a></p>
</p>
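<p>
The "custom middleware" workaround mentioned in the description above can be made concrete. Below is a minimal, hypothetical Go sketch — the `sanitizeMethod` name and the `OTHER` placeholder are our own, not from the advisory — showing a handler chain where unknown HTTP verbs are collapsed before they reach `promhttp.InstrumentHandlerCounter`, keeping the cardinality of the `method` label bounded:
</p>

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// knownMethods bounds the set of values that can ever reach the "method"
// label, so unknown verbs cannot create unbounded label cardinality.
var knownMethods = map[string]bool{
	http.MethodGet: true, http.MethodHead: true, http.MethodPost: true,
	http.MethodPut: true, http.MethodPatch: true, http.MethodDelete: true,
	http.MethodOptions: true,
}

// sanitizeMethod rewrites any non-standard HTTP method to a fixed
// placeholder before the request reaches the instrumented handler.
func sanitizeMethod(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !knownMethods[r.Method] {
			r.Method = "OTHER" // collapse unknown verbs into one label value
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	requests := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "HTTP requests partitioned by method.",
		},
		[]string{"method"},
	)
	prometheus.MustRegister(requests)

	app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// The sanitizing middleware wraps the *outside* of the promhttp
	// instrumentation so it runs first.
	http.Handle("/", sanitizeMethod(promhttp.InstrumentHandlerCounter(requests, app)))
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

<p>
Any request with an unrecognized verb then lands in the single `OTHER` series instead of minting a new one.
</p>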
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/prometheus/client_golang/security/advisories/GHSA-cg3q-j54f-5p7p">https://github.com/prometheus/client_golang/security/advisories/GHSA-cg3q-j54f-5p7p</a></p>
<p>Release Date: 2022-02-15</p>
<p>Fix Resolution: v1.11.1</p>
</p>
<p></p>
</details> | True | sigs.k8s.io/Controller-runtime-v0.9.2: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sigs.k8s.io/Controller-runtime-v0.9.2</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/8f0573cd4136e25910b19e834292128da3e232d8">8f0573cd4136e25910b19e834292128da3e232d8</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (sigs.k8s.io/Controller-runtime-v0.9.2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-21698](https://www.mend.io/vulnerability-database/CVE-2022-21698) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/prometheus/Client_golang-v1.11.0 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-21698</summary>
### Vulnerable Library - <b>github.com/prometheus/Client_golang-v1.11.0</b></p>
<p>Prometheus instrumentation library for Go applications</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/prometheus/client_golang/@v/v1.11.0.zip">https://proxy.golang.org/github.com/prometheus/client_golang/@v/v1.11.0.zip</a></p>
<p>
Dependency Hierarchy:
- sigs.k8s.io/Controller-runtime-v0.9.2 (Root Library)
- :x: **github.com/prometheus/Client_golang-v1.11.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/8f0573cd4136e25910b19e834292128da3e232d8">8f0573cd4136e25910b19e834292128da3e232d8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
client_golang is the instrumentation library for Go applications in Prometheus, and the promhttp package in client_golang provides tooling around HTTP servers and clients. In client_golang prior to version 1.11.1, HTTP server is susceptible to a Denial of Service through unbounded cardinality, and potential memory exhaustion, when handling requests with non-standard HTTP methods. In order to be affected, an instrumented software must use any of `promhttp.InstrumentHandler*` middleware except `RequestsInFlight`; not filter any specific methods (e.g GET) before middleware; pass metric with `method` label name to our middleware; and not have any firewall/LB/proxy that filters away requests with unknown `method`. client_golang version 1.11.1 contains a patch for this issue. Several workarounds are available, including removing the `method` label name from counter/gauge used in the InstrumentHandler; turning off affected promhttp handlers; adding custom middleware before promhttp handler that will sanitize the request method given by Go http.Request; and using a reverse proxy or web application firewall, configured to only allow a limited set of methods.
<p>Publish Date: 2022-02-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21698>CVE-2022-21698</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/prometheus/client_golang/security/advisories/GHSA-cg3q-j54f-5p7p">https://github.com/prometheus/client_golang/security/advisories/GHSA-cg3q-j54f-5p7p</a></p>
<p>Release Date: 2022-02-15</p>
<p>Fix Resolution: v1.11.1</p>
</p>
<p></p>
</details> | non_test | sigs io controller runtime vulnerabilities highest severity is autoclosed vulnerable library sigs io controller runtime found in head commit a href vulnerabilities cve severity cvss dependency type fixed in sigs io controller runtime version remediation available high github com prometheus client golang transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library github com prometheus client golang prometheus instrumentation library for go applications library home page a href dependency hierarchy sigs io controller runtime root library x github com prometheus client golang vulnerable library found in head commit a href found in base branch master vulnerability details client golang is the instrumentation library for go applications in prometheus and the promhttp package in client golang provides tooling around http servers and clients in client golang prior to version http server is susceptible to a denial of service through unbounded cardinality and potential memory exhaustion when handling requests with non standard http methods in order to be affected an instrumented software must use any of promhttp instrumenthandler middleware except requestsinflight not filter any specific methods e g get before middleware pass metric with method label name to our middleware and not have any firewall lb proxy that filters away requests with unknown method client golang version contains a patch for this issue several workarounds are available including removing the method label name from counter gauge used in the instrumenthandler turning off affected promhttp handlers adding custom middleware before promhttp handler that will sanitize the request method given by go http request and using a reverse proxy or web application firewall configured to only allow a limited set of methods publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
115,427 | 11,874,141,901 | IssuesEvent | 2020-03-26 18:28:41 | aguirre-lab/icu | https://api.github.com/repos/aguirre-lab/icu | opened | Document procedure to match BM files with patient's ID | documentation | **What**
Document on the Wiki the procedure to obtain the .csv file in which BM files are matched with patients.
**Why**
Understanding this process will help us do it ourselves. If more BM files are needed in the future, we will be able to cross-reference them with patients' IDs.
**How**
Talk to Yuping to get more information about the process. Document it on the Wiki with relevant information and diagram of the general workflow.
**Acceptance Criteria**
Improve the "Cross-Referencing Bedmaster data with EDW data" section to have a well structured explanation of this process.
| 1.0 | Document procedure to match BM files with patient's ID - **What**
Document on the Wiki the procedure to obtain the .csv file in which BM files are matched with patients.
**Why**
Understanding this process will help us do it ourselves. If more BM files are needed in the future, we will be able to cross-reference them with patients' IDs.
**How**
Talk to Yuping to get more information about the process. Document it on the Wiki with relevant information and diagram of the general workflow.
**Acceptance Criteria**
Improve the "Cross-Referencing Bedmaster data with EDW data" section to have a well structured explanation of this process.
| non_test | document procedure to match bm files with patient s id what document on the wiki the procedure to obtain the csv file in which bm files are matched with patients why understand this process will help us to do it by ourselves if more bm files are needed in the future we will be able to cross reference them with patients id how talk to yuping to get more information about the process document it on the wiki with relevant information and diagram of the general workflow acceptance criteria improve the cross referencing bedmaster data with edw data section to have a well structured explanation of this process | 0 |
152,647 | 12,123,339,396 | IssuesEvent | 2020-04-22 12:33:59 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | VMWare VM Inventory Plugin Host Filters | affects_2.10 cloud feature has_pr inventory support:core test vmware | <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add ability to filter down results when using the vmware_vm_inventory plugin
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vm_inventory
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The older vmware_inventory.py script had the ability to filter down hosts based on "host_filters". Please implement a similar feature into the vmware_vm_inventory plugin.
| 1.0 | VMWare VM Inventory Plugin Host Filters - <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add ability to filter down results when using the vmware_vm_inventory plugin
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vm_inventory
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The older vmware_inventory.py script had the ability to filter down hosts based on "host_filters". Please implement a similar feature into the vmware_vm_inventory plugin.
| test | vmware vm inventory plugin host filters summary add ability to filter down results when using the vmware vm inventory plugin issue type feature idea component name vmware vm inventory additional information the older vmware inventory py script had the ability to filter down hosts based on host filters please implement a similar feature into the vmware vm inventory plugin | 1 |
1,657 | 4,214,840,430 | IssuesEvent | 2016-06-30 00:12:52 | MJRLegends/Space-Astronomy-Feedback- | https://api.github.com/repos/MJRLegends/Space-Astronomy-Feedback- | closed | Using Tinkers LumberAxe crashes client | compatibility issue fixed in next update/version/on the server mod bug MUST SEE | If you use the Tinkers LumberAxe the client crashes as soon as the "Chopping" is done, and the tree is disappearing.
Stacktrace from Client crash
`Stacktrace:
at portablejim.veinminer.core.CoreEvents.blockBreakEvent(CoreEvents.java:25)
at cpw.mods.fml.common.eventhandler.ASMEventHandler_624_CoreEvents_blockBreakEvent_BreakEvent.invoke(.dynamic)
at cpw.mods.fml.common.eventhandler.ASMEventHandler.invoke(ASMEventHandler.java:54)
at cpw.mods.fml.common.eventhandler.EventBus.post(EventBus.java:140)
at tconstruct.items.tools.LumberAxe.breakTree(LumberAxe.java:173)
at tconstruct.items.tools.LumberAxe.onBlockStartBreak(LumberAxe.java:101)
at net.minecraft.client.multiplayer.PlayerControllerMP.func_78751_a(PlayerControllerMP.java:96)
at net.minecraft.client.multiplayer.PlayerControllerMP.func_78759_c(PlayerControllerMP.java:247)
at net.minecraft.client.Minecraft.func_147115_a(Minecraft.java:1357)`
Full crash report attached
[crash-2016-06-29_16.52.59-client.txt](https://github.com/MJRLegends/Space-Astronomy-Feedback-/files/339634/crash-2016-06-29_16.52.59-client.txt)
| True | Using Tinkers LumberAxe crashes client - If you use the Tinkers LumberAxe the client crashes as soon as the "Chopping" is done, and the tree is disappearing.
Stacktrace from Client crash
`Stacktrace:
at portablejim.veinminer.core.CoreEvents.blockBreakEvent(CoreEvents.java:25)
at cpw.mods.fml.common.eventhandler.ASMEventHandler_624_CoreEvents_blockBreakEvent_BreakEvent.invoke(.dynamic)
at cpw.mods.fml.common.eventhandler.ASMEventHandler.invoke(ASMEventHandler.java:54)
at cpw.mods.fml.common.eventhandler.EventBus.post(EventBus.java:140)
at tconstruct.items.tools.LumberAxe.breakTree(LumberAxe.java:173)
at tconstruct.items.tools.LumberAxe.onBlockStartBreak(LumberAxe.java:101)
at net.minecraft.client.multiplayer.PlayerControllerMP.func_78751_a(PlayerControllerMP.java:96)
at net.minecraft.client.multiplayer.PlayerControllerMP.func_78759_c(PlayerControllerMP.java:247)
at net.minecraft.client.Minecraft.func_147115_a(Minecraft.java:1357)`
Full crash report attached
[crash-2016-06-29_16.52.59-client.txt](https://github.com/MJRLegends/Space-Astronomy-Feedback-/files/339634/crash-2016-06-29_16.52.59-client.txt)
| non_test | using tinkers lumberaxe crashes client if you use the tinkers lumberaxe the client crashes as soon as the chopping is done and the tree is disappearing stacktrace from client crash stacktrace at portablejim veinminer core coreevents blockbreakevent coreevents java at cpw mods fml common eventhandler asmeventhandler coreevents blockbreakevent breakevent invoke dynamic at cpw mods fml common eventhandler asmeventhandler invoke asmeventhandler java at cpw mods fml common eventhandler eventbus post eventbus java at tconstruct items tools lumberaxe breaktree lumberaxe java at tconstruct items tools lumberaxe onblockstartbreak lumberaxe java at net minecraft client multiplayer playercontrollermp func a playercontrollermp java at net minecraft client multiplayer playercontrollermp func c playercontrollermp java at net minecraft client minecraft func a minecraft java full crash report attached | 0 |
280,456 | 24,306,541,814 | IssuesEvent | 2022-09-29 17:57:14 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Test extension debugging in a clean environment | debug testplan-item | Refs: #159572
- [x] macOS @rzhao271
- [x] windows @hediet
- [x] linux @andreamah
Complexity: 2
Authors: @weinand, @sandy081
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23160905%0A%0A&assignees=weinand,sandy081)
---
When debugging an extension, there was always the problem that the extension was running in the development environment (user settings and installed extensions) of the author of the extension and not in an environment that was more appropriate for the target user of the extension.
With the recently introduced "profiles" feature it is now possible to run the extension under development in a different environment by specifying a profile in the extension's debug configuration.
Two scenarios are supported:
- "debugging in a clean environment" by using an unnamed "empty" profile that gets automatically deleted when extension debugging has stopped.
- "debugging in a controlled environment" by using a named profile that has been created specifically for the extension under development, and that contains specific user settings and extensions.
This debug configuration shows how to "debug in a clean environment":
```json
{
"name": "Extension",
"type": "extensionHost",
"request": "launch",
"args": [
"--profile-temp",
"--extensionDevelopmentPath=${workspaceFolder}"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"preLaunchTask": "npm: watch"
}
```
And here is a debug configuration for "debugging in a controlled environment" that uses a previously created profile named "extensionContext":
```json
{
"name": "Extension",
"type": "extensionHost",
"request": "launch",
"args": [
"--profile=extensionContext",
"--extensionDevelopmentPath=${workspaceFolder}"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"preLaunchTask": "npm: watch"
}
```
Known limitation:
when debugging an extension in a remote location (via the "Remote Development" extensions "Containers", "SSH", or "WSL"), using the `--profile-temp` flag will result in this status message:

This is expected because the temporary profile does not include any extensions, which means that the "Remote Development" extensions are missing too. For remote scenarios it is recommended to create an empty named profile, add the "Remote Development" extensions to it, and then use the `--profile=....` command line option.
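To make that recommendation concrete — `remoteExtensionDev` below is a hypothetical profile name, standing in for a named profile created beforehand with the "Remote Development" extensions installed — the launch configuration mirrors the named-profile example above:
```json
{
    "name": "Extension (remote)",
    "type": "extensionHost",
    "request": "launch",
    "args": [
        "--profile=remoteExtensionDev",
        "--extensionDevelopmentPath=${workspaceFolder}"
    ],
    "outFiles": [
        "${workspaceFolder}/dist/**/*.js"
    ],
    "preLaunchTask": "npm: watch"
}
```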
---
Please verify for your favourite extension...
- that the "--profile-temp" and "--profile=<profile name>" command lines flags work for the two scenarios from above.
- that the "--profile=<profile name>" flag works in a **remote setup** of your choice (SSH, WSL, Container).
| 1.0 | Test extension debugging in a clean environment - Refs: #159572
- [x] macOS @rzhao271
- [x] windows @hediet
- [x] linux @andreamah
Complexity: 2
Authors: @weinand, @sandy081
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23160905%0A%0A&assignees=weinand,sandy081)
---
When debugging an extension, there was always the problem that the extension was running in the development environment (user settings and installed extensions) of the author of the extension and not in an environment that was more appropriate for the target user of the extension.
With the recently introduced "profiles" feature it is now possible to run the extension under development in a different environment by specifying a profile in the extension's debug configuration.
Two scenarios are supported:
- "debugging in a clean environment" by using an unnamed "empty" profile that gets automatically deleted when extension debugging has stopped.
- "debugging in a controlled environment" by using a named profile that has been created specifically for the extension under development, and that contains specific user settings and extensions.
This debug configuration shows how to "debug in a clean environment":
```json
{
"name": "Extension",
"type": "extensionHost",
"request": "launch",
"args": [
"--profile-temp",
"--extensionDevelopmentPath=${workspaceFolder}"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"preLaunchTask": "npm: watch"
}
```
And here is a debug configuration for "debugging in a controlled environment" that uses a previously created profile named "extensionContext":
```json
{
"name": "Extension",
"type": "extensionHost",
"request": "launch",
"args": [
"--profile=extensionContext",
"--extensionDevelopmentPath=${workspaceFolder}"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"preLaunchTask": "npm: watch"
}
```
Known limitation:
when debugging an extension in a remote location (via the "Remote Development" extensions "Containers", "SSH", or "WSL"), using the `--profile-temp` flag will result in this status message:

This is expected because the temporary profile does not include any extensions, which means that the "Remote Development" extensions are missing too. For remote scenarios it is recommended to create an empty named profile, add the "Remote Development" extensions to it, and then use the `--profile=....` command line option.
---
Please verify for your favourite extension...
- that the "--profile-temp" and "--profile=<profile name>" command lines flags work for the two scenarios from above.
- that the "--profile=<profile name>" flag works in a **remote setup** of your choice (SSH, WSL, Container).
| test | test extension debugging in a clean environment refs macos windows hediet linux andreamah complexity authors weinand when debugging an extension there was always the problem that the extension was running in the development environment user settings and installed extensions of the author of the extension and not in an environment that was more appropriate for the target user of the extension with the recently introduced profiles feature it is now possible to run the extension under development in a different environment by specifying a profile in the extension s debug configuration two scenarios are supported debugging in a clean environment by using an unnamed empty profile that gets automatically deleted when extension debugging has stopped debugging in a controlled environment by using a named profile that has been created specifically for the extension under development and that contains specific user settings and extensions this debug configuration shows how to debug in a clean environment json name extension type extensionhost request launch args profile temp extensiondevelopmentpath workspacefolder outfiles workspacefolder dist js prelaunchtask npm watch and here is a debug configuration for debugging in a controlled environment that uses a previously created profile named extensioncontext json name extension type extensionhost request launch args profile extensioncontext extensiondevelopmentpath workspacefolder outfiles workspacefolder dist js prelaunchtask npm watch known limitation when debugging an extension in a remote location via the remote development extensions containers ssl or wsl using the profile temp flag will result in this status message this is expected because the temporary profile does not include any extensions which means that the remote development extensions are missing too for remote scenarios it is recommended to create an empty named profile add the remote development extensions to it and then use the profile command line option please verify for your favourite extension that the profile temp and profile command lines flags work for the two scenarios from above that the profile flag works in a remote setup of your choice ssh wsl container | 1 |
126,105 | 17,868,842,121 | IssuesEvent | 2021-09-06 12:58:27 | fasttrack-solutions/jQuery-QueryBuilder | https://api.github.com/repos/fasttrack-solutions/jQuery-QueryBuilder | opened | CVE-2021-23383 (High) detected in handlebars-4.1.2.tgz | security vulnerability | ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- foodoc-0.0.9.tgz (Root Library)
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fasttrack-solutions/jQuery-QueryBuilder/commit/9291825bff1e01bb64535f99d7badac198ddbca0">9291825bff1e01bb64535f99d7badac198ddbca0</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 is vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23383 (High) detected in handlebars-4.1.2.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- foodoc-0.0.9.tgz (Root Library)
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fasttrack-solutions/jQuery-QueryBuilder/commit/9291825bff1e01bb64535f99d7badac198ddbca0">9291825bff1e01bb64535f99d7badac198ddbca0</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 is vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file jquery querybuilder package json path to vulnerable library jquery querybuilder node modules handlebars package json dependency hierarchy foodoc tgz root library x handlebars tgz vulnerable library found in head commit a href found in base branch dev vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
223,396 | 17,598,141,674 | IssuesEvent | 2021-08-17 08:26:36 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed |
tests-ci :coredump.logging_backend : test failed
| bug priority: low platform: NXP area: Tests |
**Describe the bug**
coredump.logging_backend test failed on zephyr-v2.6.0-734-gc1cff7558928 on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --sub-test coredump.logging_backend
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-734-gc1cff7558928 ***
Coredump: lpcxpresso55S28
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-734-gc1cff7558928
| 1.0 |
tests-ci :coredump.logging_backend : test failed
-
**Describe the bug**
coredump.logging_backend test failed on zephyr-v2.6.0-734-gc1cff7558928 on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --sub-test coredump.logging_backend
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-734-gc1cff7558928 ***
Coredump: lpcxpresso55S28
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-734-gc1cff7558928
| test | tests ci coredump logging backend test failed describe the bug coredump logging backend test is failed on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p sub test coredump logging backend see error expected behavior test pass impact logs and console output booting zephyr os build zephyr coredump assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr | 1 |
714,075 | 24,549,487,333 | IssuesEvent | 2022-10-12 11:27:08 | alan-turing-institute/AutisticaCitizenScience | https://api.github.com/repos/alan-turing-institute/AutisticaCitizenScience | closed | Gather feedback on videos each sprint | community priority-high fujitsu | - [x] Define user groups who need to review and how many people
- [x] Post videos online (link to them from GitHub)
- [x] Write prompting questions for feedback required
- [x] Put questions publicly so they can also be reviewed
- [ ] Write explanation of the video so it can be easily followed
- [x] Add explanatory voiceover
- [x] Circulate the video to the groups who need to review it
- [x] Link the questions to the Google form: https://bit.ly/AutisticaTuringCitSciForm so that people can add it here for research - also give the option to email
- [x] Gather feedback; iterate on that basis and check metrics where possible
- [x] Add documentation so that someone else could follow the exact process | 1.0 | Gather feedback on videos each sprint - - [x] Define user groups who need to review and how many people
- [x] Post videos online (link to them from GitHub)
- [x] Write prompting questions for feedback required
- [x] Put questions publicly so they can also be reviewed
- [ ] Write explanation of the video so it can be easily followed
- [x] Add explanatory voiceover
- [x] Circulate the video to the groups who need to review it
- [x] Link the questions to the Google form: https://bit.ly/AutisticaTuringCitSciForm so that people can add it here for research - also give the option to email
- [x] Gather feedback; iterate on that basis and check metrics where possible
- [x] Add documentation so that someone else could follow the exact process | non_test | gather feedback on videos each sprint define user groups who need to review and how many people post videos online link to them from github writeprompting questions for feedback required put questions publicly so they can also be reviewed write explanation of the video so it can be easily followed add explanatory voiceover circulate the video to the groups who need to review it link the questions to the google form so that people can it add it here for research also give option to email gather feedback iterate on that basis check metrics where possible add documentation so that someone else could follow the exact process | 0 |
122,206 | 16,092,742,262 | IssuesEvent | 2021-04-26 18:52:25 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Fleet] Easier editing and display for long URLs in Fleet settings | Team:Fleet design | The Fleet settings flyout displays well for most long URLs, but for extra long ones the URLs get truncated. In addition, it's not possible to edit existing URLs - the user will need to delete and re-add it. Can we improve this UX so that full URLs can be displayed and edited?

| 1.0 | [Fleet] Easier editing and display for long URLs in Fleet settings - The Fleet settings flyout displays well for most long URLs, but for extra long ones the URLs get truncated. In addition, it's not possible to edit existing URLs - the user will need to delete and re-add it. Can we improve this UX so that full URLs can be displayed and edited?

| non_test | easier editing and display for long urls in fleet settings the fleet settings flyout displays well for most long urls but for extra long ones the urls get truncated in addition it s not possible to edit existing urls the user will need to delete and re add it can we improve this ux so that full urls can be displayed and edited | 0 |
22,085 | 18,524,007,893 | IssuesEvent | 2021-10-20 18:08:26 | lethal-guitar/RigelEngine | https://api.github.com/repos/lethal-guitar/RigelEngine | closed | Allow Bonus Screen at End of Level to Be Skippable by Pressing Enter | enhancement usability | @lethal-guitar
At the bonus screen at the end of the level, allow the player to skip the screen by pressing enter.
Also, on the bonus screen, maybe put some text that says `You can skip this screen by pressing Enter.` | True | Allow Bonus Screen at End of Level to Be Skippable by Pressing Enter - @lethal-guitar
At the bonus screen at the end of the level, allow the player to skip the screen by pressing enter.
Also, on the bonus screen, maybe put some text that says `You can skip this screen by pressing Enter.` | non_test | allow bonus screen at end of level to be skippable by pressing enter lethal guitar at the bonus screen at the end of the level allow the player to skip the screen by pressing enter also on the bonus screen maybe put some text that says you can skip this screen by pressing enter | 0 |
52,952 | 6,287,454,566 | IssuesEvent | 2017-07-19 14:59:51 | apache/couchdb | https://api.github.com/repos/apache/couchdb | closed | undefined error in couch_mrview_changes_since_tests:test_compact/1 | testsuite | ```
changes_since tests
couch_mrview_changes_since_tests:109: test_basic...ok
couch_mrview_changes_since_tests:132: test_basic_since...ok
couch_mrview_changes_since_tests:145: test_basic_count...ok
couch_mrview_changes_since_tests:154: test_basic_count_since...ok
undefined
*** instantiation of subtests failed ***
**in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_mrview_util:get_view_index_state/5 (src/couch_mrview_util.erl, line 83)
in call from couch_mrview_util:get_view/4 (src/couch_mrview_util.erl, line 45)
in call from couch_mrview:count_view_changes_since/5 (src/couch_mrview.erl, line 304)
in call from couch_mrview_changes_since_tests:test_compact/1 (test/couch_mrview_changes_since_tests.erl, line 163)
**exit:{unknown_info,{gen_server,call,[<0.23284.1>,{get_state,11},infinity]}}
```
https://couchdb-vm2.apache.org/ci_errorlogs/travis-couchdb-254596045-2017-07-17T20%3A28%3A00.475245/couchlog.tar.gz | 1.0 | undefined error in couch_mrview_changes_since_tests:test_compact/1 - ```
changes_since tests
couch_mrview_changes_since_tests:109: test_basic...ok
couch_mrview_changes_since_tests:132: test_basic_since...ok
couch_mrview_changes_since_tests:145: test_basic_count...ok
couch_mrview_changes_since_tests:154: test_basic_count_since...ok
undefined
*** instantiation of subtests failed ***
**in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_mrview_util:get_view_index_state/5 (src/couch_mrview_util.erl, line 83)
in call from couch_mrview_util:get_view/4 (src/couch_mrview_util.erl, line 45)
in call from couch_mrview:count_view_changes_since/5 (src/couch_mrview.erl, line 304)
in call from couch_mrview_changes_since_tests:test_compact/1 (test/couch_mrview_changes_since_tests.erl, line 163)
**exit:{unknown_info,{gen_server,call,[<0.23284.1>,{get_state,11},infinity]}}
```
https://couchdb-vm2.apache.org/ci_errorlogs/travis-couchdb-254596045-2017-07-17T20%3A28%3A00.475245/couchlog.tar.gz | test | undefined error in couch mrview changes since tests test compact changes since tests couch mrview changes since tests test basic ok couch mrview changes since tests test basic since ok couch mrview changes since tests test basic count ok couch mrview changes since tests test basic count since ok undefined instantiation of subtests failed in function gen server call gen server erl line in call from couch mrview util get view index state src couch mrview util erl line in call from couch mrview util get view src couch mrview util erl line in call from couch mrview count view changes since src couch mrview erl line in call from couch mrview changes since tests test compact test couch mrview changes since tests erl line exit unknown info gen server call | 1 |
284,854 | 24,624,585,581 | IssuesEvent | 2022-10-16 10:55:23 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | Don't build EFCore.Benchmarks.EF6 on non-Windows | area-test | Travis CI is failing trying to build `EFCore.Benchmarks.EF6`. For tests, we exclude `net452`; however this project is special in that it *only* targets `net452`. | 1.0 | Don't build EFCore.Benchmarks.EF6 on non-Windows - Travis CI is failing trying to build `EFCore.Benchmarks.EF6`. For tests, we exclude `net452`; however this project is special in that it *only* targets `net452`. | test | don t build efcore benchmarks on non windows travis ci is failing trying to build efcore benchmarks for tests we exclude however this project is special in that it only targets | 1 |
71,869 | 8,687,042,172 | IssuesEvent | 2018-12-03 12:38:36 | BeerLiftersAssociation/infrastructure | https://api.github.com/repos/BeerLiftersAssociation/infrastructure | closed | Team select architectural approach for project 1 | design | Go over proposed architectural approaches for project 1, as suggested by team.
https://beerliftersassociation.github.io/design-project1
Target 12/3
| 1.0 | Team select architectural approach for project 1 - Go over proposed architectural approaches for project 1, as suggested by team.
https://beerliftersassociation.github.io/design-project1
Target 12/3
| non_test | team select architectural approach for project go over proposed architectural approaches for project as suggested by team target | 0 |
38,260 | 10,163,915,641 | IssuesEvent | 2019-08-07 10:21:15 | weaveworks/scope | https://api.github.com/repos/weaveworks/scope | closed | Remove references to quay.io | chore component/build | We've moved away from publishing Scope images in quay.io, but there are still places in the code referring to it, e.g.
* `.circleci/config.yml`
* `bin/release` script
We should remove all the code that tries to push to quay.io as we're unlikely to get back to it any time soon.
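As a quick, illustrative way to locate the remaining references before deleting the push logic (a plain grep over the repo, nothing project-specific assumed):
```
# List every remaining reference so the push logic can be deleted:
grep -rn "quay.io" .
```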
| 1.0 | Remove references to quay.io - We've moved away from publishing Scope images in quay.io, but there are still places in the code referring to it, e.g.
* `.circleci/config.yml`
* `bin/release` script
We should remove all the code that tries to push to quay.io as we're unlikely to get back to it any time soon.
| non_test | remove references to quay io we ve moved away from publishing scope images in quay io but there are still places in the code referring to it e g circleci config yml bin release script we should remove all the code that tries to push to quay io as we re unlikely to get back to it any time soon | 0 |
209,400 | 7,175,258,771 | IssuesEvent | 2018-01-31 04:15:21 | ChalkyBrush/roshpit-bug-tracker | https://api.github.com/repos/ChalkyBrush/roshpit-bug-tracker | closed | Red general + chitinous lobster claw | bug: math or logic priority: low | For some reason (I suppose it is because of E1, maybe I'm wrong) stacks of chitinous lobster claw are decreasing each second or so. You can still get 96 stacks, but it seems like it takes more time, and it visually jumps between 95 and 96 even if you don't take damage. | 1.0 | Red general + chitinous lobster claw - For some reason (I suppose it is because of E1, maybe I'm wrong) stacks of chitinous lobster claw are decreasing each second or so. You can still get 96 stacks, but it seems like it takes more time, and it visually jumps between 95 and 96 even if you don't take damage. | non_test | red general chitinous lobster claw for some reason i suppose it is because of maybe i m wrong stacks of chitinous lobster claw decrising each second or so you still can get stacks but seems like it takes more time and it visually jumps between and even if you don t take damage | 0 |
261,219 | 22,706,881,770 | IssuesEvent | 2022-07-05 15:21:03 | MohistMC/Mohist | https://api.github.com/repos/MohistMC/Mohist | closed | 1.18.2 farmers delight stove bug | Wait Needs Testing | <!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://gist.github.com (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.18.2 forge 40.1.52
**Mohist Version :** 1.18.2-DEV (60)
**Operating System :** windows 10
**Concerned mod / plugin** : https://www.curseforge.com/minecraft/mc-mods/farmers-delight
**Logs :** https://paste.ee/p/tlEXk
**Steps to Reproduce :**
1. Do this
2. Then do that
3. ...
**Description of issue :**
I put Farmers Delight on my server.
When I or other players step on a stove, we get instantly kicked, and when a hostile mob steps on it and takes damage, my server dies.
| 1.0 | 1.18.2 farmers delight stove bug - <!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://gist.github.com (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.18.2 forge 40.1.52
**Mohist Version :** 1.18.2-DEV (60)
**Operating System :** windows 10
**Concerned mod / plugin** : https://www.curseforge.com/minecraft/mc-mods/farmers-delight
**Logs :** https://paste.ee/p/tlEXk
**Steps to Reproduce :**
1. Do this
2. Then do that
3. ...
**Description of issue :**
I put Farmers Delight on my server.
When I or other players step on a stove, we get instantly kicked, and when a hostile mob steps on it and takes damage, my server dies.
| test | farmers delight stove bug important do not delete this line minecraft version forge mohist version dev operating system windows concerned mod plugin logs steps to reproduce do this then do that description of issue i put farmers delight on my server and when i and other harmful mobs and players step on a stove i and others get instantly kicked and when a mob steps on it and takes damage my server dies | 1 |
206,456 | 15,731,680,168 | IssuesEvent | 2021-03-29 17:22:13 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | closed | [FLAKEY TEST] cli-test -> cli -> account:authorize cmd -> can authorize validator signer after validator is registered | FLAKEY cli cli-test devX | FlakeTracker closed this issue after commit fb8da80fbe7ff0444d3f8137c935154dd3677313
Discovered at commit 75fcfc517cf280b84548c18ac0697de13d582a36
Attempt No. 1:
Error: Invalid JSON RPC response: ""
at Object.InvalidResponse (/home/circleci/app/node_modules/web3-core-helpers/lib/errors.js:43:16)
at XMLHttpRequest.request.onreadystatechange (/home/circleci/app/node_modules/web3-providers-http/lib/index.js:95:32)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequestEventTarget.dispatchEvent (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request-event-target.ts:44:13)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequest._setReadyState (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:219:8)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequest._onHttpRequestError (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:379:8)
at ClientRequest.<anonymous> (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:266:37)
at ClientRequest.emit (events.js:198:13)
at Socket.socketOnEnd (_http_client.js:426:9)
at Socket.emit (events.js:203:15)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
Attempt No. 2:
Test Passed!
| 1.0 | [FLAKEY TEST] cli-test -> cli -> account:authorize cmd -> can authorize validator signer after validator is registered - FlakeTracker closed this issue after commit fb8da80fbe7ff0444d3f8137c935154dd3677313
Discovered at commit 75fcfc517cf280b84548c18ac0697de13d582a36
Attempt No. 1:
Error: Invalid JSON RPC response: ""
at Object.InvalidResponse (/home/circleci/app/node_modules/web3-core-helpers/lib/errors.js:43:16)
at XMLHttpRequest.request.onreadystatechange (/home/circleci/app/node_modules/web3-providers-http/lib/index.js:95:32)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequestEventTarget.dispatchEvent (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request-event-target.ts:44:13)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequest._setReadyState (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:219:8)
at XMLHttpRequest.Object.<anonymous>.XMLHttpRequest._onHttpRequestError (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:379:8)
at ClientRequest.<anonymous> (/home/circleci/app/node_modules/xhr2-cookies/xml-http-request.ts:266:37)
at ClientRequest.emit (events.js:198:13)
at Socket.socketOnEnd (_http_client.js:426:9)
at Socket.emit (events.js:203:15)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
Attempt No. 2:
Test Passed!
| test | cli test cli account authorize cmd can authorize validator signer after validator is registered flaketracker closed this issue after commit discovered at commit attempt no error invalid json rpc response at object invalidresponse home circleci app node modules core helpers lib errors js at xmlhttprequest request onreadystatechange home circleci app node modules providers http lib index js at xmlhttprequest object xmlhttprequesteventtarget dispatchevent home circleci app node modules cookies xml http request event target ts at xmlhttprequest object xmlhttprequest setreadystate home circleci app node modules cookies xml http request ts at xmlhttprequest object xmlhttprequest onhttprequesterror home circleci app node modules cookies xml http request ts at clientrequest home circleci app node modules cookies xml http request ts at clientrequest emit events js at socket socketonend http client js at socket emit events js at endreadablent stream readable js at process tickcallback internal process next tick js attempt no test passed | 1 |
4,296 | 3,344,742,716 | IssuesEvent | 2015-11-16 07:43:46 | Starcounter/Starcounter | https://api.github.com/repos/Starcounter/Starcounter | opened | Nightly build failed with no disk space | Build System Infrastructure P/SHOWSTOPPER | > Error in response: System.IO.IOException: There is not enough space on the disk. | 1.0 | Nightly build failed with no disk space - > Error in response: System.IO.IOException: There is not enough space on the disk. | non_test | nightly build failed with no disk space error in response system io ioexception there is not enough space on the disk | 0 |
246,198 | 20,828,356,600 | IssuesEvent | 2022-03-19 02:34:12 | project-chip/connectedhomeip | https://api.github.com/repos/project-chip/connectedhomeip | closed | iOS chip tool - Scanning QR code doesn't always start pairing process (CHIP Error 0x00000003: Incorrect state at ../../../../../../../../../../../connectedhomeip/src/controller/SetUpCodePairer) | darwin V1.0 smoke test | #### Problem
Scanning a QR code using the iOS chip tool app doesn't always start the pairing process
SHA: 983fedfcbc38909d73b80f8b5c1d2d8ac163d0ff
1. Build the iOS chip tool app using Xcode and load it on an iOS device
2. Use the QR code scanner option to scan in a QR code, e.g. from an ESP32 board
At step 2, occasionally (e.g. 25% of the time), the app gets into a state where it doesn't start the pairing process. The following error is seen:
2022-02-09 09:51:23.904861-0800 localhost CHIPTool[6690]: (CHIP) [com.zigbee.chip:all] 🔴 [1644429083904] [6690:247785] CHIP: [-] ../../../../../../../../../../../connectedhomeip/src/controller/CHIPDeviceController.cpp:789: CHIP Error 0x00000003: Incorrect state at ../../../../../../../../../../../connectedhomeip/src/controller/SetUpCodePairer
2022-02-09 09:51:23.905582-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] Received XPC message 5: (null)
2022-02-09 09:51:23.905632-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] XPC connection finalized
2022-02-09 09:51:23.905719-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] XPC connection invalid
2022-02-09 09:51:25.353261-0800 localhost CHIPTool[6690]: (CHIP) PersistentStorageDelegate Set Key StartKeyID
2022-02-09 09:51:25.353699-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.defaults:User Defaults] setting {
StartKeyID = "AgA=";
} in CFPrefsPlistSource<0x28171eb80> (Domain: com.apple.chiptool, User: kCFPreferencesCurrentUser, ByHost: No, Container: (null), Contents Need Refresh: No)
2022-02-09 09:51:25.355357-0800 localhost CHIPTool[6690]: (CHIP) DevicePairingDelegate Pairing complete. Status ../../../../../../../../../../../connectedhomeip/src/controller/CHIPDeviceController.cpp:1399: CHIP Error 0x00000032: Timeout
2022-02-09 09:51:25.355769-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:resources] Resource lookup at CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded)
Request : Localizable type: strings
Result : None
2022-02-09 09:51:25.355922-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:resources] Resource lookup at CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded)
Request : Localizable type: stringsdict
Result : None
2022-02-09 09:51:25.356006-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:strings] Hit last resort and creating empty strings table
2022-02-09 09:51:25.356123-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.defaults:User Defaults] found no value for key NSShowNonLocalizedStrings in CFPrefsSearchListSource<0x281711380> (Domain: com.chip.CHIPTool-Kean, Container: (null))
2022-02-09 09:51:25.356238-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:strings] Bundle: CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded), key: Undefined error:%u., value: , table: Localizable, localizationName: (null), result: Undefined error:%u.
2022-02-09 09:51:25.356476-0800 localhost CHIPTool[6690]: Got pairing error back Error Domain=CHIPErrorDomain Code=1 "Undefined error:50."
To recover from this state, you need to force close the chip tool app
| 1.0 | iOS chip tool - Scanning QR code doesn't always start pairing process (CHIP Error 0x00000003: Incorrect state at ../../../../../../../../../../../connectedhomeip/src/controller/SetUpCodePairer) - #### Problem
Scanning a QR code using the iOS chip tool app doesn't always start the pairing process
SHA: 983fedfcbc38909d73b80f8b5c1d2d8ac163d0ff
1. Build the iOS chip tool app using Xcode and load it on an iOS device
2. Use the QR code scanner option to scan in a QR code, e.g. from an ESP32 board
At step 2, occasionally (e.g. 25% of the time), the app gets into a state where it doesn't start the pairing process. The following error is seen:
2022-02-09 09:51:23.904861-0800 localhost CHIPTool[6690]: (CHIP) [com.zigbee.chip:all] 🔴 [1644429083904] [6690:247785] CHIP: [-] ../../../../../../../../../../../connectedhomeip/src/controller/CHIPDeviceController.cpp:789: CHIP Error 0x00000003: Incorrect state at ../../../../../../../../../../../connectedhomeip/src/controller/SetUpCodePairer
2022-02-09 09:51:23.905582-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] Received XPC message 5: (null)
2022-02-09 09:51:23.905632-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] XPC connection finalized
2022-02-09 09:51:23.905719-0800 localhost CHIPTool[6690]: (CoreBluetooth) [com.apple.bluetooth:CoreBluetooth] XPC connection invalid
2022-02-09 09:51:25.353261-0800 localhost CHIPTool[6690]: (CHIP) PersistentStorageDelegate Set Key StartKeyID
2022-02-09 09:51:25.353699-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.defaults:User Defaults] setting {
StartKeyID = "AgA=";
} in CFPrefsPlistSource<0x28171eb80> (Domain: com.apple.chiptool, User: kCFPreferencesCurrentUser, ByHost: No, Container: (null), Contents Need Refresh: No)
2022-02-09 09:51:25.355357-0800 localhost CHIPTool[6690]: (CHIP) DevicePairingDelegate Pairing complete. Status ../../../../../../../../../../../connectedhomeip/src/controller/CHIPDeviceController.cpp:1399: CHIP Error 0x00000032: Timeout
2022-02-09 09:51:25.355769-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:resources] Resource lookup at CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded)
Request : Localizable type: strings
Result : None
2022-02-09 09:51:25.355922-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:resources] Resource lookup at CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded)
Request : Localizable type: stringsdict
Result : None
2022-02-09 09:51:25.356006-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:strings] Hit last resort and creating empty strings table
2022-02-09 09:51:25.356123-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.defaults:User Defaults] found no value for key NSShowNonLocalizedStrings in CFPrefsSearchListSource<0x281711380> (Domain: com.chip.CHIPTool-Kean, Container: (null))
2022-02-09 09:51:25.356238-0800 localhost CHIPTool[6690]: (CoreFoundation) [com.apple.CFBundle:strings] Bundle: CFBundle 0x10cd04b90 </private/var/containers/Bundle/Application/BA5EB36A-1B92-4C79-8430-BADCFA9EB7F6/CHIPTool.app> (executable, loaded), key: Undefined error:%u., value: , table: Localizable, localizationName: (null), result: Undefined error:%u.
2022-02-09 09:51:25.356476-0800 localhost CHIPTool[6690]: Got pairing error back Error Domain=CHIPErrorDomain Code=1 "Undefined error:50."
To recover from this state, you need to force close the chip tool app
| test | ios chip tool scanning qr code doesn t always start pairing process chip error incorrect state at connectedhomeip src controller setupcodepairer problem scanning qr code using the ios chip tool app doesn t always start pairing process sha build ios chip tool app using xcode load on ios device use qacode scanner option to scan in a qr code eg from board the step occasionally eg of times app gets into a state where it doesn t start the pairing process following error is seen localhost chiptool chip ð´ chip connectedhomeip src controller chipdevicecontroller cpp chip error incorrect state at connectedhomeip src controller setupcodepairer localhost chiptool corebluetooth received xpc message null localhost chiptool corebluetooth xpc connection finalized localhost chiptool corebluetooth xpc connection invalid localhost chiptool chip persistentstoragedelegate set key startkeyid localhost chiptool corefoundation setting startkeyid aga in cfprefsplistsource domain com apple chiptool user kcfpreferencescurrentuser byhost no container null contents need refresh no localhost chiptool chip devicepairingdelegate pairing complete status connectedhomeip src controller chipdevicecontroller cpp chip error timeout localhost chiptool corefoundation resource lookup at cfbundle executable loaded request localizable type strings result none localhost chiptool corefoundation resource lookup at cfbundle executable loaded request localizable type stringsdict result none localhost chiptool corefoundation hit last resort and creating empty strings table localhost chiptool corefoundation found no value for key nsshownonlocalizedstrings in cfprefssearchlistsource domain com chip chiptool kean container null localhost chiptool corefoundation bundle cfbundle executable loaded key undefined error u value table localizable localizationname null result undefined error u localhost chiptool got pairing error back error domain chiperrordomain code undefined error to recover from this state you need to force close the chip tool app | 1 |
58,220 | 24,371,256,593 | IssuesEvent | 2022-10-03 19:30:26 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | [New Data Source]: aws_vpc_ipam_pool_cidrs | new-data-source service/ipam | ### Description
Requesting a data source that would effectively run `get-ipam-pool-cidrs` against a particular pool and return a list of all provisioned CIDRs. You could then use this in your child accounts, for example to attach routes to a route table.
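For context, this is data the EC2 API already exposes; here is a minimal sketch of the lookup such a data source would wrap, using the AWS SDK for JavaScript v3 (the pool ID is invented, and the result-field names are assumptions based on the `GetIpamPoolCidrs` action):

```ts
import { EC2Client, GetIpamPoolCidrsCommand } from "@aws-sdk/client-ec2";

// Fetch every provisioned CIDR for one IPAM pool, i.e. the list the
// proposed aws_vpc_ipam_pool_cidrs data source would surface to Terraform.
async function listPoolCidrs(poolId: string): Promise<string[]> {
  const client = new EC2Client({});
  const cidrs: string[] = [];
  let nextToken: string | undefined;
  do {
    const page = await client.send(
      new GetIpamPoolCidrsCommand({ IpamPoolId: poolId, NextToken: nextToken })
    );
    for (const entry of page.IpamPoolCidrs ?? []) {
      if (entry.Cidr) cidrs.push(entry.Cidr);
    }
    nextToken = page.NextToken;
  } while (nextToken);
  return cidrs;
}

// e.g. listPoolCidrs("ipam-pool-0123456789abcdef0").then(console.log);
```

A Terraform data source would essentially expose that list as an attribute for child-account route resources to reference.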
### Requested Resource(s) and/or Data Source(s)
data aws_vpc_ipam_pool_cidrs
### Potential Terraform Configuration
_No response_
### References
_No response_
### Would you like to implement a fix?
_No response_
| 1.0 | non_test | 0
112,784 | 11,779,366,821 | IssuesEvent | 2020-03-16 17:53:24 | BrentApps/Gay-Men | https://api.github.com/repos/BrentApps/Gay-Men | closed | App is too slow | bug documentation duplicate enhancement good first issue | Bear with us as we are looking into the issue and will be working on it shortly. Should you have further issues, contact us. | 1.0 | non_test | 0
257,309 | 22,155,033,131 | IssuesEvent | 2022-06-03 21:24:42 | FuelLabs/sway | https://api.github.com/repos/FuelLabs/sway | closed | In-language Sway Testing | bikeshedding compiler forc testing P: low | Here I propose a simple test harness plugin, allowing tests in Sway.
Essentially you have a simple library, which contains a few asserts (assert_eq, assert_neq, etc.). Call it `swest`: Sway+Test.
Tests are written in a Sway script using the asserts. A plugin is installed, say `swest`. What does the `swest` plugin do? Simple: it runs the script, picks up the logs, and interprets the log data as test output (similar to many other harnesses).
The benefit of this: tests can be written in Sway without knowing any Rust or having to configure a Rust test harness.
Example:
```rust
script;
use swest::*;
fn main() {
let k = 1;
let v = 1;
assert_eq(k, v, "k == v");
}
```
CLI:
```bash
forc swest
```
Output:
```
Sway Testing
1 Test Passing, 1 Test Total
-----
1. "k == v" pass
```
Log construction:
`assert_eq` would use something like `logd` to put all the data in memory, encoding the appropriate test values; the test harness itself would do the comparison (a decoder sketch follows the layout below).
```memory
[ test type, 2 bytes ] [ first value ] [ second value ] [ message ]
```
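To make the harness side concrete, here is a rough decoder sketch. It is illustrative only: a real `forc` plugin would more likely be written in Rust, and everything past the 2-byte test type (8-byte values, a UTF-8 message, the numeric tag itself) is an assumption:

```ts
// Hypothetical decoder for the layout sketched above:
// [ test type, 2 bytes ][ first value, 8 bytes ][ second value, 8 bytes ][ message ]
const ASSERT_EQ = 0x0001; // assumed tag emitted by assert_eq

function judgeSwestLog(blob: Uint8Array): { pass: boolean; message: string } {
  const view = new DataView(blob.buffer, blob.byteOffset, blob.byteLength);
  const testType = view.getUint16(0); // big-endian by default
  const first = view.getBigUint64(2);
  const second = view.getBigUint64(10);
  const message = new TextDecoder().decode(blob.subarray(18));
  if (testType !== ASSERT_EQ) throw new Error(`unknown test type ${testType}`);
  // The comparison happens in the harness, not in the script.
  return { pass: first === second, message };
}
```

The harness would then tally the pass/fail results into the report shown above.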
Eventually, with annotations, we could define the test methods better, and the harness could also pick up which section of tests it is running.
For now, this is an idea for conversation only.
Caveats:
- Unlike Ethereum, Sway cannot create another contract in-language. So contract deployment would need to happen first, and the deployment details would then need to be fed in as transaction data, which is non-trivial.
| 1.0 | test | 1
323,571 | 27,736,905,419 | IssuesEvent | 2023-03-15 11:52:39 | NationalSecurityAgency/skills-service | https://api.github.com/repos/NationalSecurityAgency/skills-service | closed | Add ability to configure custom warning message when files are uploaded to a description | enhancement review test | - add ability to configure server-side custom message
- when a message is configured and a file is added to a description, display the configured warning
- one idea is to display the warning right below the "Insert images..." messages in the footer, as in the screenshot and the sketch below:

| 1.0 | test | 1
45,784 | 5,732,413,410 | IssuesEvent | 2017-04-21 14:48:11 | appcelerator/amp | https://api.github.com/repos/appcelerator/amp | closed | README in examples/webclient-demo-function incorrect. | area/test kind/docs | Webclient-demo-function README's "testing locally" section is incorrect. It should be:
```
cat Dockerfile | webclient-demo-function
```
Even after this change, the output is incorrect, as it outputs:
```
The organisation or user # golang:alpine provides an up to date go build environment has 0 repositories on the Docker hub.
```
| 1.0 | test | 1
27,116 | 4,875,557,813 | IssuesEvent | 2016-11-16 09:58:10 | TNGSB/eWallet | https://api.github.com/repos/TNGSB/eWallet | closed | eWallet_MobileApp_Android (Registration) #02 | Defect - Low (Sev-4) | [Defect_Mobile App #02.xlsx](https://github.com/TNGSB/eWallet/files/565943/Defect_Mobile.App.02.xlsx)
Test Description: To validate the field values for "Login ID" - validate field length.
Expected Result: "System should not allow input of more than 36 characters and should stop input when the user reaches 36 characters.
If the user tries to insert more than 36 characters, the system should prompt an error message." (See the guard sketch below.)
Actual Result: System allowed the user to key in more than 36 characters.
Refer to the attached document for a screenshot.
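The expected behavior boils down to a guard like the following sketch (the 36-character limit comes from the test case; the function shape is invented):

```ts
const MAX_LOGIN_ID_LENGTH = 36; // limit under test

// Truncate input at the limit and report whether the user overran it,
// so the UI can both stop further input and prompt an error message.
function enforceLoginIdLength(input: string): { value: string; overLimit: boolean } {
  const overLimit = input.length > MAX_LOGIN_ID_LENGTH;
  return { value: input.slice(0, MAX_LOGIN_ID_LENGTH), overLimit };
}
```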
*Applies to both Android & iOS
| 1.0 | non_test | 0
24,535 | 4,006,832,501 | IssuesEvent | 2016-05-12 16:04:12 | Supmenow/sup-issues | https://api.github.com/repos/Supmenow/sup-issues | closed | My friends disappear when I hide | defect | When I click show, it takes time for them to come back.
Friends are not in sync with the hide button. I can sometimes see friends in dark mode and not in friends mode.
| 1.0 | non_test | 0
707,944 | 24,324,825,280 | IssuesEvent | 2022-09-30 13:57:02 | Together-Java/TJ-Bot | https://api.github.com/repos/Together-Java/TJ-Bot | opened | Add user-context-commands for Moderation | enhancement enhance command priority: normal | ## Overview
We just got our hands on `UserContextCommand`s; they are cool, and ideal for most moderation commands. We should add duplicates of the existing slash commands for them:
* [ ] ban
* [ ] unban
* [ ] kick
* [ ] warn
* [ ] mute
* [ ] unmute
* [ ] audit
* [ ] note
* [ ] quarantine
* [ ] unquarantine
* [ ] whois
It is totally okay (and to be expected) if this addition is done step by step, PR by PR.
## Slashcommands
This is not supposed to replace the existing slash commands; we still need them. Moderators must still be able to ban users (and similar) just by knowing the ID of the user.
## Duplication
How to tackle the obvious code duplication is up to the implementer. In most cases, it is probably easiest to simply implement `UserContextCommand` in the existing slash-command class, such as `BanCommand extends SlashCommandAdapter implements UserContextCommand`. The new flow can then reuse all the existing logic, and duplication is minimized.
| 1.0 | non_test | 0
2,833 | 2,717,322,572 | IssuesEvent | 2015-04-11 05:22:30 | linkedin/dustjs | https://api.github.com/repos/linkedin/dustjs | closed | Add loadSource example to Getting Started Guide | documentation | An example of how to get Dust to work (without precompiling templates) in the Getting Started Guide will help new users who want to try out Dust without having to set up a server environment.
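For instance, the guide entry could look roughly like this (template name and data invented; `dust-full.js`, the build that bundles the compiler, is assumed to be loaded on the page):

```ts
declare const dust: any; // global provided by dust-full.js

// Compile the template in the browser, register it, then render it.
const compiled = dust.compile("Hello {name}!", "greeting");
dust.loadSource(compiled);

dust.render("greeting", { name: "world" }, (err: Error | null, out: string) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(out); // "Hello world!"
});
```

This keeps the guide server-free while still exercising the real compile-and-render path.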
We should note that compiling templates on the client is not optimal for production use cases.
| 1.0 | non_test | 0