Column schema (name, dtype, observed value/length range):

- `Unnamed: 0` (int64): 0 to 832k
- `id` (float64): 2.49B to 32.1B
- `type` (string): 1 class (`IssuesEvent`)
- `created_at` (string): length 19
- `repo` (string): length 4 to 112
- `repo_url` (string): length 33 to 141
- `action` (string): 3 classes
- `title` (string): length 1 to 1.02k
- `labels` (string): length 4 to 1.54k
- `body` (string): length 1 to 262k
- `index` (string): 17 classes
- `text_combine` (string): length 95 to 262k
- `label` (string): 2 classes
- `text` (string): length 96 to 252k
- `binary_label` (int64): 0 or 1

Sample records follow, fields in the order above, separated by `|`:
224,624
| 17,762,281,746
|
IssuesEvent
|
2021-08-29 22:56:01
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
TEST: audit uses of maybeWarnsRegex
|
module: tests triaged
|
Follow-on for #47624, which created an `assertWarnsOnceRegex` context manager to test C-level `TORCH_WARN_ONCE`. All the places that currently use `maybeWarnsRegex` should be replaced with `assertWarnsOnceRegex`, and any untested `TORCH_WARN_ONCE` code should be covered by a test.
cc @mruberry @VitalyFedyunin @walterddr
|
1.0
|
TEST: audit uses of maybeWarnsRegex - Follow-on for #47624, which created an `assertWarnsOnceRegex` context manager to test C-level `TORCH_WARN_ONCE`. All the places that currently use `maybeWarnsRegex` should be replaced with `assertWarnsOnceRegex`, and any untested `TORCH_WARN_ONCE` code should be covered by a test.
cc @mruberry @VitalyFedyunin @walterddr
|
test
|
test audit uses of maybewarnsregex follow on for which created an assertwarnsonceregex context manager to test c level torch warn once all the places that currently use maybewarnsregex should be replaced with the assertwarnsonceregex and any untested torch warn once code should be covered by a test cc mruberry vitalyfedyunin walterddr
| 1
|
265,789
| 23,198,241,726
|
IssuesEvent
|
2022-08-01 18:36:36
|
metrico/qryn
|
https://api.github.com/repos/metrico/qryn
|
closed
|
qryn managed alert doesn't get triggered.
|
help wanted needs testing
|
Hello again and good day to you.
I've created an alerting rule managed by `qryn` via [this guide](https://github.com/metrico/qryn/wiki/Ruler---Alerts#-qryn-ruler--alert-manager) and it looks like this:

The problem is when I wanted to test it by manually inserting some logs to match the condition, the alert didn't change state from normal to pending and then firing even though when you click on the graph it shows that it has definitely matched the condition as it can be seen in the below picture:

I've tested some other expressions and log queries but the state doesn't change at all. As a matter of fact, the `normal` string in the first picture looks a bit weird since it doesn't have the green box around it.
In addition, I tried to see if the rule is actually inserted into qryn and it was:
```bash
curl -i -XGET -H "Content-Type: application/json" http://<SOME_IP>:<SOME_PORT>/api/prom/rules
HTTP/1.1 200 OK
vary: Origin
access-control-allow-origin: *
content-type: yaml
content-length: 288
Date: Sun, 31 Jul 2022 13:36:20 GMT
Connection: keep-alive
Keep-Alive: timeout=5
fake:
- interval: 1s
  name: foobar
  rules:
  - alert: foo-app-error
    expr: rate({appname="foo-app"} |~ ".*error.*" [10s]) > 1
    for: 1m
    annotations:
      description: component foo is in an error state
      summary: foo error
    labels: {}
```
My Grafana notifications and routing are configured correctly, in fact, I tried the same expression but with a Grafana managed alerting rule and it worked just fine.
- Am I doing it right? Maybe I've misconfigured something (I'm open to sharing more information if needed.)
- Isn't there any other way to insert and add `qryn` managed alerting rules other than Grafana GUI and the API? (not related to this issue just asking).
|
1.0
|
qryn managed alert doesn't get triggered. - Hello again and good day to you.
I've created an alerting rule managed by `qryn` via [this guide](https://github.com/metrico/qryn/wiki/Ruler---Alerts#-qryn-ruler--alert-manager) and it looks like this:

The problem is when I wanted to test it by manually inserting some logs to match the condition, the alert didn't change state from normal to pending and then firing even though when you click on the graph it shows that it has definitely matched the condition as it can be seen in the below picture:

I've tested some other expressions and log queries but the state doesn't change at all. As a matter of fact, the `normal` string in the first picture looks a bit weird since it doesn't have the green box around it.
In addition, I tried to see if the rule is actually inserted into qryn and it was:
```bash
curl -i -XGET -H "Content-Type: application/json" http://<SOME_IP>:<SOME_PORT>/api/prom/rules
HTTP/1.1 200 OK
vary: Origin
access-control-allow-origin: *
content-type: yaml
content-length: 288
Date: Sun, 31 Jul 2022 13:36:20 GMT
Connection: keep-alive
Keep-Alive: timeout=5
fake:
- interval: 1s
  name: foobar
  rules:
  - alert: foo-app-error
    expr: rate({appname="foo-app"} |~ ".*error.*" [10s]) > 1
    for: 1m
    annotations:
      description: component foo is in an error state
      summary: foo error
    labels: {}
```
My Grafana notifications and routing are configured correctly, in fact, I tried the same expression but with a Grafana managed alerting rule and it worked just fine.
- Am I doing it right? Maybe I've misconfigured something (I'm open to sharing more information if needed.)
- Isn't there any other way to insert and add `qryn` managed alerting rules other than Grafana GUI and the API? (not related to this issue just asking).
|
test
|
qryn managed alert doesn t get triggered hello again and good day to you i ve created an alerting rule managed by qryn via and it looks like this the problem is when i wanted to test it by manually inserting some logs to match the condition the alert didn t change state from normal to pending and then firing even though when you click on the graph it shows that it has definitely matched the condition as it can be seen in the below picture i ve tested some other expressions and log queries but the state doesn t change at all as a matter of fact the normal string in the first picture looks a bit weird since it doesn t have the green box around it in addition i tried to see if the rule is actually inserted into qryn and it was bash curl i xget h content type application json http ok vary origin access control allow origin content type yaml content length date sun jul gmt connection keep alive keep alive timeout fake interval name foobar rules alert foo app error expr rate appname foo app error for annotations description component foo is in an error state summary foo error labels my grafana notifications and routing are configured correctly in fact i tried the same expression but with a grafana managed alerting rule and it worked just fine am i doing it right maybe i ve misconfigured something i m open to sharing more information if needed isn t there any other way to insert and add qryn managed alerting rules other than grafana gui and the api not related to this issue just asking
| 1
|
678,769
| 23,210,135,937
|
IssuesEvent
|
2022-08-02 09:24:50
|
testomatio/app
|
https://api.github.com/repos/testomatio/app
|
closed
|
Add the ability to choose where to send the results of a re-run
|
enhancement reporting ci\cd priority medium
|
**Is your feature request related to a problem? Please describe.**
Sometimes tests fail not because they are bad, but because there are problems on the test bench. When you re-run the failed tests, it is not always necessary to create a new run, which clogs the run table.
**Describe the solution you'd like**
Add the ability to choose where to send the results of a re-run: whether to create a new run or use the same one.
|
1.0
|
Add the ability to choose where to send the results of a re-run - **Is your feature request related to a problem? Please describe.**
Sometimes tests fail not because they are bad, but because there are problems on the test bench. When you re-run the failed tests, it is not always necessary to create a new run, which clogs the run table.
**Describe the solution you'd like**
Add the ability to choose where to send the results of a re-run: whether to create a new run or use the same one.
|
non_test
|
add the ability to choose where to send the results of a re run is your feature request related to a problem please describe sometimes tests fail not because they are bad but because there are problems on the test bench when you re run the failed tests it is not always necessary to create a new run which clogs the run table describe the solution you d like add the ability to choose where to send the results of a re run whether to create a new run or use the same one
| 0
|
81,832
| 7,805,226,455
|
IssuesEvent
|
2018-06-11 10:05:55
|
ODIQueensland/data-curator
|
https://api.github.com/repos/ODIQueensland/data-curator
|
closed
|
UAT v0.17.0
|
i:User-Acceptance-Test
|
Sponsor to user acceptance test Data Curator
- review [acceptance tests](https://app.cucumber.pro/projects/data-curator/documents/branch/master)
- [download](https://github.com/ODIQueensland/data-curator/releases), install, and test Data Curator
- [report issues](https://github.com/ODIQueensland/data-curator/issues/new?template=bug.md&labels=problem:Bug&assignee=Stephen-Gates)
cc: @louisjasek
|
1.0
|
UAT v0.17.0 - Sponsor to user acceptance test Data Curator
- review [acceptance tests](https://app.cucumber.pro/projects/data-curator/documents/branch/master)
- [download](https://github.com/ODIQueensland/data-curator/releases), install, and test Data Curator
- [report issues](https://github.com/ODIQueensland/data-curator/issues/new?template=bug.md&labels=problem:Bug&assignee=Stephen-Gates)
cc: @louisjasek
|
test
|
uat sponsor to user acceptance test data curator review install and test data curator cc louisjasek
| 1
|
237,745
| 19,671,112,662
|
IssuesEvent
|
2022-01-11 07:23:10
|
purefun/today-i-learned
|
https://api.github.com/repos/purefun/today-i-learned
|
closed
|
t.Helper() for assertion helper function
|
golang testing
|
`t.Helper()` will report the `assertCorrectMessage` caller's line number instead of `t.Errorf`'s.
```go
func TestHello(t *testing.T) {
	assertCorrectMessage := func(t testing.TB, got, want string) {
		t.Helper()
		if got != want {
			t.Errorf("got %q want %q", got, want)
		}
	}
	// test 1
	t.Run("saying hello to people", func(t *testing.T) {
		got := Hello("Chris")
		want := "Hello, Chris"
		assertCorrectMessage(t, got, want)
	})
	// test 2
	t.Run("empty string defaults to 'World'", func(t *testing.T) {
		got := Hello("")
		want := "Hello, World"
		assertCorrectMessage(t, got, want)
	})
}
```
|
1.0
|
t.Helper() for assertion helper function - `t.Helper()` will report the `assertCorrectMessage` caller's line number instead of `t.Errorf`'s.
```go
func TestHello(t *testing.T) {
	assertCorrectMessage := func(t testing.TB, got, want string) {
		t.Helper()
		if got != want {
			t.Errorf("got %q want %q", got, want)
		}
	}
	// test 1
	t.Run("saying hello to people", func(t *testing.T) {
		got := Hello("Chris")
		want := "Hello, Chris"
		assertCorrectMessage(t, got, want)
	})
	// test 2
	t.Run("empty string defaults to 'World'", func(t *testing.T) {
		got := Hello("")
		want := "Hello, World"
		assertCorrectMessage(t, got, want)
	})
}
```
|
test
|
t helper for assertion helper function t helper will report assertcorrectmessage callers line number instead of t errorf s go func testhello t testing t assertcorrectmessage func t testing tb got want string t helper if got want t errorf got q want q got want test t run saying hello to people func t testing t got hello chris want hello chris assertcorrectmessage t got want test t run empty string defaults to world func t testing t got hello want hello world assertcorrectmessage t got want
| 1
|
679,934
| 23,250,913,267
|
IssuesEvent
|
2022-08-04 03:33:34
|
MuntashirAkon/AppManager
|
https://api.github.com/repos/MuntashirAkon/AppManager
|
closed
|
Support for market://search
|
Feature Priority: 3 Status: Accepted
|
Add support for `market://search?q=<package-name>` so that people can access searching facility directly from the launcher's search engine (if they support it).
|
1.0
|
Support for market://search - Add support for `market://search?q=<package-name>` so that people can access searching facility directly from the launcher's search engine (if they support it).
|
non_test
|
support for market search add support for market search q so that people can access searching facility directly from the launcher s search engine if they support it
| 0
|
298,720
| 25,851,638,687
|
IssuesEvent
|
2022-12-13 10:47:23
|
mozilla-mobile/fenix
|
https://api.github.com/repos/mozilla-mobile/fenix
|
opened
|
[UITests] Track ignored tests from #27262
|
eng:disabled-test eng:ui-test
|
In https://github.com/mozilla-mobile/fenix/pull/27262 we disabled some failing tests after changing the interaction with the homescreen from `SearchDialogFragment`.
Failing tests were ignored with:
`@Ignore("Failing after changing SearchDialog homescreen interaction. See: https://github.com/mozilla-mobile/fenix/issues/??")`
cc @AndiAJ @sv-ohorvath
|
2.0
|
[UITests] Track ignored tests from #27262 - In https://github.com/mozilla-mobile/fenix/pull/27262 we disabled some failing tests after changing the interaction with the homescreen from `SearchDialogFragment`.
Failing tests were ignored with:
`@Ignore("Failing after changing SearchDialog homescreen interaction. See: https://github.com/mozilla-mobile/fenix/issues/??")`
cc @AndiAJ @sv-ohorvath
|
test
|
track ignored tests from in we disabled some failing tests after changing the interaction with the homescreen from searchdialogfragment failing tests were ignored with ignore failing after changing searchdialog homescreen interaction see cc andiaj sv ohorvath
| 1
|
84,709
| 7,930,151,671
|
IssuesEvent
|
2018-07-06 17:39:42
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
closed
|
Lazy load Tor instead of at startup
|
QA/test-plan-specified feature/tor release-notes/include release/blocking
|
## Test plan
See https://github.com/brave/browser-laptop/pull/14668
### Description
Currently Tor is loaded always at startup, but not everyone wants Tor.
This negatively affects startup time and also bypasses the ability to not use Tor which is important especially for corporate users.
### Steps to Reproduce
1. npm start
**Actual result:**
You can see Tor logging
**Expected result:**
You only see Tor logging once the first private tab is opened.
**Reproduces how often:**
Always at startup
### Brave Version
**about:brave info:**
Brave: 0.23.19
V8: 6.7.288.46
rev: 178c3fb
Muon: 7.1.3
OS Release: 17.5.0
Update Channel: Release
OS Architecture: x64
OS Platform: macOS
Node.js: 7.9.0
Tor: 0.3.3.7 (git-035a35178c92da94)
Brave Sync: v1.4.2
libchromiumcontent: 67.0.3396.87
**Reproducible on current live release:**
Yes
### Additional Information
<!--
Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue.
-->
|
1.0
|
Lazy load Tor instead of at startup - ## Test plan
See https://github.com/brave/browser-laptop/pull/14668
### Description
Currently Tor is loaded always at startup, but not everyone wants Tor.
This negatively affects startup time and also bypasses the ability to not use Tor which is important especially for corporate users.
### Steps to Reproduce
1. npm start
**Actual result:**
You can see Tor logging
**Expected result:**
You only see Tor logging once the first private tab is opened.
**Reproduces how often:**
Always at startup
### Brave Version
**about:brave info:**
Brave: 0.23.19
V8: 6.7.288.46
rev: 178c3fb
Muon: 7.1.3
OS Release: 17.5.0
Update Channel: Release
OS Architecture: x64
OS Platform: macOS
Node.js: 7.9.0
Tor: 0.3.3.7 (git-035a35178c92da94)
Brave Sync: v1.4.2
libchromiumcontent: 67.0.3396.87
**Reproducible on current live release:**
Yes
### Additional Information
<!--
Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue.
-->
|
test
|
lazy load tor instead of at startup test plan see description currently tor is loaded always at startup but not everyone wants tor this negatively affects startup time and also bypasses the ability to not use tor which is important especially for corporate users steps to reproduce npm start actual result you can see tor logging expected result you only see tor logging once the first private tab is opened reproduces how often always at startup brave version about brave info brave rev muon os release update channel release os architecture os platform macos node js tor git brave sync libchromiumcontent reproducible on current live release yes additional information any additional information related issues extra qa steps configuration or data that might be necessary to reproduce the issue
| 1
|
591,335
| 17,837,203,340
|
IssuesEvent
|
2021-09-03 04:04:56
|
bleachbit/bleachbit
|
https://api.github.com/repos/bleachbit/bleachbit
|
closed
|
Persistent error when deleting (Windows Defender backups)
|
bug priority:high
|
Bleachbit 4.0.0 on Windows 10.
Receive the following error during execution of clean, no errors show up during preview.
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpasbase.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpasdlta.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpavbase.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpavdlta.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpengine.dll
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpengine.lkg
|
1.0
|
Persistent error when deleting (Windows Defender backups) - Bleachbit 4.0.0 on Windows 10.
Receive the following error during execution of clean, no errors show up during preview.
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpasbase.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpasdlta.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpavbase.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpavdlta.vdm
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpengine.dll
[WinError 5] Access is denied.: Command to delete C:\ProgramData\Microsoft\Windows Defender\Definition Updates\Backup\mpengine.lkg
|
non_test
|
persistent error when deleting windows defender backups bleachbit on windows receive the following error during execution of clean no errors show up during preview access is denied command to delete c programdata microsoft windows defender definition updates backup mpasbase vdm access is denied command to delete c programdata microsoft windows defender definition updates backup mpasdlta vdm access is denied command to delete c programdata microsoft windows defender definition updates backup mpavbase vdm access is denied command to delete c programdata microsoft windows defender definition updates backup mpavdlta vdm access is denied command to delete c programdata microsoft windows defender definition updates backup mpengine dll access is denied command to delete c programdata microsoft windows defender definition updates backup mpengine lkg
| 0
|
291,784
| 25,175,086,746
|
IssuesEvent
|
2022-11-11 08:30:42
|
apache/pulsar
|
https://api.github.com/repos/apache/pulsar
|
closed
|
Intermittent test failure: DispatcherBlockConsumerTest.testConsumerBlockingWithUnAckedMessagesAndRedelivery
|
type/bug help-wanted component/test triage/week-43 lifecycle/stale
|
[build](https://builds.apache.org/job/pulsar-pull-request/org.apache.pulsar$pulsar-broker/681/testReport/junit/org.apache.pulsar.client.api/DispatcherBlockConsumerTest/testConsumerBlockingWithUnAckedMessagesAndRedelivery/)
```
Error Message
expected [true] but found [false]
Stacktrace
java.lang.AssertionError: expected [true] but found [false]
at org.testng.Assert.fail(Assert.java:94)
at org.testng.Assert.failNotEquals(Assert.java:494)
at org.testng.Assert.assertTrue(Assert.java:42)
at org.testng.Assert.assertTrue(Assert.java:52)
at org.apache.pulsar.client.api.DispatcherBlockConsumerTest.testConsumerBlockingWithUnAckedMessagesAndRedelivery(DispatcherBlockConsumerTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:46)
at org.testng.internal.InvokeMethodRunnable.run(InvokeMethodRunnable.java:37)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
|
1.0
|
Intermittent test failure: DispatcherBlockConsumerTest.testConsumerBlockingWithUnAckedMessagesAndRedelivery - [build](https://builds.apache.org/job/pulsar-pull-request/org.apache.pulsar$pulsar-broker/681/testReport/junit/org.apache.pulsar.client.api/DispatcherBlockConsumerTest/testConsumerBlockingWithUnAckedMessagesAndRedelivery/)
```
Error Message
expected [true] but found [false]
Stacktrace
java.lang.AssertionError: expected [true] but found [false]
at org.testng.Assert.fail(Assert.java:94)
at org.testng.Assert.failNotEquals(Assert.java:494)
at org.testng.Assert.assertTrue(Assert.java:42)
at org.testng.Assert.assertTrue(Assert.java:52)
at org.apache.pulsar.client.api.DispatcherBlockConsumerTest.testConsumerBlockingWithUnAckedMessagesAndRedelivery(DispatcherBlockConsumerTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:46)
at org.testng.internal.InvokeMethodRunnable.run(InvokeMethodRunnable.java:37)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
|
test
|
intermittent test failure dispatcherblockconsumertest testconsumerblockingwithunackedmessagesandredelivery error message expected but found stacktrace java lang assertionerror expected but found at org testng assert fail assert java at org testng assert failnotequals assert java at org testng assert asserttrue assert java at org testng assert asserttrue assert java at org apache pulsar client api dispatcherblockconsumertest testconsumerblockingwithunackedmessagesandredelivery dispatcherblockconsumertest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invokemethodrunnable runone invokemethodrunnable java at org testng internal invokemethodrunnable run invokemethodrunnable java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 1
|
218,102
| 16,943,819,826
|
IssuesEvent
|
2021-06-28 01:55:11
|
alibaba/nacos
|
https://api.github.com/repos/alibaba/nacos
|
closed
|
Add unit tests for package com.alibaba.nacos.client.utils in nacos 2.0
|
area/Test
|
This is a sub-issue of [ISSUE #5011]
|
1.0
|
Add unit tests for package com.alibaba.nacos.client.utils in nacos 2.0 - This is a sub-issue of [ISSUE #5011]
|
test
|
add unit tests for package com alibaba nacos client utils in nacos this is a sub issue of
| 1
|
251,782
| 21,522,977,098
|
IssuesEvent
|
2022-04-28 15:40:54
|
erikpl/SDG-ontology-visualizer
|
https://api.github.com/repos/erikpl/SDG-ontology-visualizer
|
reopened
|
Make usertest
|
i18n testing rollover
|
- [ ] Make testable Figma prototype
- [ ] Decide type of user test
- [ ] Final changes before user test 07/04/22
|
1.0
|
Make usertest - - [ ] Make testable Figma prototype
- [ ] Decide type of user test
- [ ] Final changes before user test 07/04/22
|
test
|
make usertest make testable figma prototype decide type of user test final changes before user test
| 1
|
256,891
| 22,108,942,353
|
IssuesEvent
|
2022-06-01 19:23:23
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: jepsen/g2/start-stop-2 failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-kv
|
roachtest.jepsen/g2/start-stop-2 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/5336174?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/5336174?buildTab=artifacts#/jepsen/g2/start-stop-2) on master @ [1cea73c8a18623949b81705eb5f75179e6cd8d86](https://github.com/cockroachdb/cockroach/commits/1cea73c8a18623949b81705eb5f75179e6cd8d86):
```
| initialize submodules in the clone
| -j, --jobs <n> number of submodules cloned in parallel
| --template <template-directory>
| directory from which templates will be used
| --reference <repo> reference repository
| --reference-if-able <repo>
| reference repository
| --dissociate use --reference only while cloning
| -o, --origin <name> use <name> instead of 'origin' to track upstream
| -b, --branch <branch>
| checkout <branch> instead of the remote's HEAD
| -u, --upload-pack <path>
| path to git-upload-pack on the remote
| --depth <depth> create a shallow clone of that depth
| --shallow-since <time>
| create a shallow clone since a specific time
| --shallow-exclude <revision>
| deepen history of shallow clone, excluding rev
| --single-branch clone only one branch, HEAD or --branch
| --no-tags don't clone any tags, and make later fetches not to follow them
| --shallow-submodules any cloned submodules will be shallow
| --separate-git-dir <gitdir>
| separate git dir from working tree
| -c, --config <key=value>
| set config inside the new repository
| --server-option <server-specific>
| option to transmit
| -4, --ipv4 use IPv4 addresses only
| -6, --ipv6 use IPv6 addresses only
| --filter <args> object filtering
| --remote-submodules any cloned submodules will use their remote-tracking branch
| --sparse initialize sparse-checkout file to include only files at root
|
|
| stdout:
Wraps: (6) COMMAND_PROBLEM
Wraps: (7) Node 6. Command with error:
| ``````
| bash -e -c '
| if ! test -d /mnt/data1/jepsen; then
| git clone -b tc-nightly --depth 1 https://github.com/cockroachdb/jepsen /mnt/data1/jepsen --add safe.directory /mnt/data1/jepsen
| else
| cd /mnt/data1/jepsen
| git fetch origin
| git checkout origin/tc-nightly
| fi
| '
| ``````
Wraps: (8) exit status 129
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *cluster.WithCommandDetails (6) errors.Cmd (7) *hintdetail.withDetail (8) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/g2/start-stop-2.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: jepsen/g2/start-stop-2 failed - roachtest.jepsen/g2/start-stop-2 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/5336174?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/5336174?buildTab=artifacts#/jepsen/g2/start-stop-2) on master @ [1cea73c8a18623949b81705eb5f75179e6cd8d86](https://github.com/cockroachdb/cockroach/commits/1cea73c8a18623949b81705eb5f75179e6cd8d86):
```
| initialize submodules in the clone
| -j, --jobs <n> number of submodules cloned in parallel
| --template <template-directory>
| directory from which templates will be used
| --reference <repo> reference repository
| --reference-if-able <repo>
| reference repository
| --dissociate use --reference only while cloning
| -o, --origin <name> use <name> instead of 'origin' to track upstream
| -b, --branch <branch>
| checkout <branch> instead of the remote's HEAD
| -u, --upload-pack <path>
| path to git-upload-pack on the remote
| --depth <depth> create a shallow clone of that depth
| --shallow-since <time>
| create a shallow clone since a specific time
| --shallow-exclude <revision>
| deepen history of shallow clone, excluding rev
| --single-branch clone only one branch, HEAD or --branch
| --no-tags don't clone any tags, and make later fetches not to follow them
| --shallow-submodules any cloned submodules will be shallow
| --separate-git-dir <gitdir>
| separate git dir from working tree
| -c, --config <key=value>
| set config inside the new repository
| --server-option <server-specific>
| option to transmit
| -4, --ipv4 use IPv4 addresses only
| -6, --ipv6 use IPv6 addresses only
| --filter <args> object filtering
| --remote-submodules any cloned submodules will use their remote-tracking branch
| --sparse initialize sparse-checkout file to include only files at root
|
|
| stdout:
Wraps: (6) COMMAND_PROBLEM
Wraps: (7) Node 6. Command with error:
| ``````
| bash -e -c '
| if ! test -d /mnt/data1/jepsen; then
| git clone -b tc-nightly --depth 1 https://github.com/cockroachdb/jepsen /mnt/data1/jepsen --add safe.directory /mnt/data1/jepsen
| else
| cd /mnt/data1/jepsen
| git fetch origin
| git checkout origin/tc-nightly
| fi
| '
| ``````
Wraps: (8) exit status 129
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *cluster.WithCommandDetails (6) errors.Cmd (7) *hintdetail.withDetail (8) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/g2/start-stop-2.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest jepsen start stop failed roachtest jepsen start stop with on master initialize submodules in the clone j jobs number of submodules cloned in parallel template directory from which templates will be used reference reference repository reference if able reference repository dissociate use reference only while cloning o origin use instead of origin to track upstream b branch checkout instead of the remote s head u upload pack path to git upload pack on the remote depth create a shallow clone of that depth shallow since create a shallow clone since a specific time shallow exclude deepen history of shallow clone excluding rev single branch clone only one branch head or branch no tags don t clone any tags and make later fetches not to follow them shallow submodules any cloned submodules will be shallow separate git dir separate git dir from working tree c config set config inside the new repository server option option to transmit use addresses only use addresses only filter object filtering remote submodules any cloned submodules will use their remote tracking branch sparse initialize sparse checkout file to include only files at root stdout wraps command problem wraps node command with error bash e c if test d mnt jepsen then git clone b tc nightly depth mnt jepsen add safe directory mnt jepsen else cd mnt jepsen git fetch origin git checkout origin tc nightly fi wraps exit status error types withstack withstack errutil withprefix withstack withstack errutil withprefix cluster withcommanddetails errors cmd hintdetail withdetail exec exiterror help see see cc cockroachdb kv triage
| 1
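Each record closes with a lowercased, punctuation-free `text` field derived from the title and body. A plausible reconstruction of that normalization (the exact pipeline used to build the dataset is an assumption):

```python
import re

def normalize(title: str, body: str) -> str:
    # Plausible reconstruction (assumption): lowercase, drop URLs, then drop
    # everything except letters and whitespace, and collapse runs of spaces.
    text = (title + " " + body).lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs entirely
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
    return re.sub(r"\s+", " ", text).strip()
```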
|
73,218
| 31,991,628,354
|
IssuesEvent
|
2023-09-21 06:22:58
|
elastic/integrations
|
https://api.github.com/repos/elastic/integrations
|
closed
|
CockroachDb TSDB Enablement
|
Team:Service-Integrations
|
## Test Environment Setup
- [x] Creation of CockroachDb Test Environment.
## Datastream : Status
- [x] Add dimension fields
- https://github.com/elastic/integrations/pull/5479
- [x] #6728
- https://github.com/elastic/integrations/pull/7429
#### Verification and validation
- [x] Verification of data in visualisation after enabling TSDB flag in kibana
- [x] Verification of the count of documents (before & after TSDB enablement) in Discover Interface
- [x] Verify if field mapping is correct in the data stream template.
## Issues
- https://github.com/elastic/kibana/issues/155004. [Blocker for metric_type]
Enable TSDB by default: https://github.com/elastic/integrations/pull/6774
|
1.0
|
CockroachDb TSDB Enablement - ## Test Environment Setup
- [x] Creation of CockroachDb Test Environment.
## Datastream : Status
- [x] Add dimension fields
- https://github.com/elastic/integrations/pull/5479
- [x] #6728
- https://github.com/elastic/integrations/pull/7429
#### Verification and validation
- [x] Verification of data in visualisation after enabling TSDB flag in kibana
- [x] Verification of the count of documents (before & after TSDB enablement) in Discover Interface
- [x] Verify if field mapping is correct in the data stream template.
## Issues
- https://github.com/elastic/kibana/issues/155004. [Blocker for metric_type]
Enable TSDB by default: https://github.com/elastic/integrations/pull/6774
|
non_test
|
cockroachdb tsdb enablement test environment setup creation of cockroachdb test environment datastream status add dimension fields verification and validation verification of data in visualisation after enabling tsdb flag in kibana verification of the count of documents before after tsdb enablement in discover interface verify if field mapping is correct in the data stream template issues enable tsdb by default
| 0
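The trailing `test`/`non_test` label on each record maps to the final binary column (1 and 0 in the rows above). That mapping, stated as code for clarity:

```python
def to_binary(label: str) -> int:
    # Mapping observed in the records: "test" -> 1, anything else -> 0.
    return 1 if label == "test" else 0
```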
|
267,116
| 8,379,269,718
|
IssuesEvent
|
2018-10-06 23:18:00
|
otrv4/pidgin-otrng
|
https://api.github.com/repos/otrv4/pidgin-otrng
|
closed
|
Managing persistent values
|
high priority needs clarification question
|
Some ideas:
- if the private key file is deleted, the client profile, prekey profile and prekey messages should be regenerated as well.
- if the forging key file is deleted, the client profile and (maybe) prekey profile should be regenerated as well.
- if the shared prekey file is deleted, the prekey profile should be regenerated as well.
- if the prekey messages are deleted, should we delete the published ones as well?
|
1.0
|
Managing persistent values - Some ideas:
- if the private key file is deleted, the client profile, prekey profile and prekey messages should be regenerated as well.
- if the forging key file is deleted, the client profile and (maybe) prekey profile should be regenerated as well.
- if the shared prekey file is deleted, the prekey profile should be regenerated as well.
- if the prekey messages are deleted, should we delete the published ones as well?
|
non_test
|
managing persistent values some ideas if the private key file is deleted the client profile prekey profile and prekey messages should be regenerated as well if the forging key file is deleted the client profile and maybe prekey profile should be regenerated as well if the shared prekey file is deleted the prekey profile should be regenerated as well if the prekey messages are deleted should we deleted the published ones as well
| 0
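The regeneration ideas listed in the record can be summarized as a small dependency map (the rules come from the record above; the representation itself is illustrative):

```python
# Values that should be regenerated when a given persistent file is deleted,
# following the ideas in the record above.
REGENERATE_ON_DELETE = {
    "private_key": ["client_profile", "prekey_profile", "prekey_messages"],
    "forging_key": ["client_profile", "prekey_profile"],  # prekey profile is a "maybe"
    "shared_prekey": ["prekey_profile"],
}
```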
|
177,323
| 13,691,903,361
|
IssuesEvent
|
2020-09-30 16:08:29
|
phetsims/QA
|
https://api.github.com/repos/phetsims/QA
|
closed
|
RC test: Energy Forms and Changes 1.4.0-rc.2
|
QA:rc-test
|
<!---
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ PhET Release Candidate Test Template ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Notes and Instructions for Developers:
1. Comments indicate whether something can be omitted or edited.
2. Please check the comments before trying to omit or edit something.
3. Please don't rearrange the sections.
-->
@KatieWoe, @arouinfar, @ariel-phet, @kathy-phet, energy-forms-and-changes/1.4.0-rc.2 is ready for RC testing. The phet-io version of this release will be shared with a client. The publication due date is October 1st. This is the 2nd release from the 1.4 release branch, but the code changes since the previous release were quite significant, so a full retest is needed. There are also several fixed issues that should be checked (these are listed in two of the sections below). Please document issues in https://github.com/phetsims/energy-forms-and-changes/issues and link to this issue.
Assigning to @ariel-phet for prioritization.
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 1: General RC Testing [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>General RC Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Click every single button.
- Test all possible forms of input.
- Test all mouse/trackpad inputs.
- Test all touchscreen inputs.
- If there is sound, make sure it works.
- Make sure you can't lose anything.
- Play with the sim normally.
- Try to break the sim.
- Test all query parameters on all platforms. (See [QA Book](https://github.com/phetsims/QA/blob/master/doc/qa-book.md)
for a list of query parameters.)
- Download HTML on Chrome and iOS.
- Make sure the iFrame version of the simulation is working as intended on all platforms.
- Make sure the XHTML version of the simulation is working as intended on all platforms.
- Complete the test matrix.
- Don't forget to make sure the sim works with Legends of Learning.
- Test the Game Up harness on at least one platform.
- Check [this](https://docs.google.com/spreadsheets/d/1umIAmhn89WN1nzcHKhYJcv-n3Oj6ps1wITc-CjWYytE/edit#gid=0) LoL
spreadsheet and notify AR or AM if it is not there.
- If this is rc.2 please do a memory test.
- When making an issue, check to see if it was in a previously published version
- Try to include version numbers for browsers
- If there is a console available, check for errors and include them in the Problem Description.
- As an RC begins and ends, check the sim repo. If there is a maintenance issue, check it and notify developers if
there is a problem.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Issues to Verify</h3>
- [x] [Water temp has delayed reset on second screen](https://github.com/phetsims/energy-forms-and-changes/issues/343)
- [x] [Water energy chunks boil far too early when biker runs out of food](https://github.com/phetsims/energy-forms-and-changes/issues/346)
- [x] [Energy chunks don't appear in falling water if faucet is off](https://github.com/phetsims/energy-forms-and-changes/issues/347)
These issues should have the "status:ready-for-qa" label. Check these issues off and close them if they are fixed.
Otherwise, post a comment in the issue saying that it wasn't fixed and link back to this issue. If the label is
"status:ready-for-review" or "status:fixed-pending-deploy" then assign back to the developer when done, even if fixed.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Simulation](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/energy-forms-and-changes_all_phet.html)**
- **[iFrame](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/energy-forms-and-changes_all_iframe_phet.html)**
- **[XHTML](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/xhtml/energy-forms-and-changes_all.xhtml)**
- **[Test Matrix](https://docs.google.com/spreadsheets/d/19WAm2BOsEg1f8XCLo-PT8eQsGgb3uR9Npt6obPrMiA4/edit#gid=1313829856)**
- **[Legends of Learning Harness](https://developers.legendsoflearning.com/public-harness/index.html?startPayload=%7B%22languageCode%22%3A%22en%22%7D)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 2: PhET-iO RC Test [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>PhET-iO RC Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Make sure that public files do not have password protection. Use a private browser for this.
- Make sure that private files do have password protection. Use a private browser for this.
- Make sure standalone sim is working properly.
- Make sure the wrapper index is working properly.
- Make sure each wrapper is working properly.
- Launch the simulation in Studio with ?stringTest=xss and make sure the sim doesn't navigate to youtube
- For newer PhET-iO wrapper indices, save the "basic example of a functional wrapper" as a .html file and open it. Make
sure the simulation loads without crashing or throwing errors.
- For an update or maintenance release please check the backwards compatibility of the playback wrapper.
[Here's the link to the previous wrapper.](link)
- Load the login wrapper just to make sure it works. Do so by adding this link from the sim deployed root:
```
/wrappers/login/?wrapper=record&validationRule=validateDigits&&numberOfDigits=5&promptText=ENTER_A_5_DIGIT_NUMBER
```
Further instructions in QA Book
- Conduct a recording test to Metacog, further instructions in the QA Book. Do this for iPadOS + Safari and one other random platform.
- Conduct a memory test on the stand alone sim wrapper.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Focus and Special Instructions</h3>
Please pay close attention to loading/setting state (hitting "launch" in Studio), especially on the second screen. Because of some implementation decisions, the loaded state may not match perfectly for all cases, but please bring any behavior that is unexpected to my attention. I think @KatieWoe is getting pretty familiar with what to expect for state on this sim, but feel free to Slack me any time with questions, or make an issue. Thanks!
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Issues to Verify</h3>
- [x] [Tea kettle energy chunk preloading](https://github.com/phetsims/energy-forms-and-changes/issues/336)
- [x] [Second screen state](https://github.com/phetsims/energy-forms-and-changes/issues/306)
- [x] [Feed Me button is pressed when launched](https://github.com/phetsims/energy-forms-and-changes/issues/335)
- [x] [Hiding Buttons on Second Screen Hides Connected Element as Well](https://github.com/phetsims/energy-forms-and-changes/issues/348)
- [x] [screen 1 blocks and beakers emitted chunks don't work in state](https://github.com/phetsims/energy-forms-and-changes/issues/361)
- [x] [Studio launch doesn't restore EnergyChunk state like the state wrapper](https://github.com/phetsims/energy-forms-and-changes/issues/362)
- [x] [Memory leak in the view when setting state](https://github.com/phetsims/energy-forms-and-changes/issues/368)
- [x] [Water from faucet can have gaps and other odd-looking behavior](https://github.com/phetsims/energy-forms-and-changes/issues/369)
These issues should have the "status:ready-for-qa" label. Check these issues off and close them if they are fixed.
Otherwise, post a comment in the issue saying that it wasn't fixed and link back to this issue. If the label is
"status:ready-for-review" or "status:fixed-pending-deploy" then assign back to the developer when done, even if fixed.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Wrapper Index](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet-io/)**
- **[Test Matrix](https://docs.google.com/spreadsheets/d/17FYYm5Halt8VVi4vMIsjGsrtELD317S5f0zNUdWqt-A/edit#gid=1474718953)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 4: FAQs for QA Members [DO NOT OMIT, DO NOT EDIT]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>FAQs for QA Members</b></summary>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>There are multiple tests in this issue... Which test should I do first?</i></summary>
Test in order! Test the first thing first, the second thing second, and so on.
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>How should I format my issue?</i></summary>
Here's a template for making issues:
<b>Test Device</b>
blah
<b>Operating System</b>
blah
<b>Browser</b>
blah
<b>Problem Description</b>
blah
<b>Steps to Reproduce</b>
blah
<b>Visuals</b>
blah
<details>
<summary><b>Troubleshooting Information</b></summary>
blah
</details>
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>Who should I assign?</i></summary>
We typically assign the developer who opened the issue in the QA repository.
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>My question isn't in here... What should I do?</i></summary>
You should:
1. Consult the [QA Book](https://github.com/phetsims/QA/blob/master/doc/qa-book.md).
2. Google it.
3. Ask Katie.
4. Ask a developer.
5. Google it again.
6. Cry.
</details>
<br>
<hr>
</details>
|
1.0
|
RC test: Energy Forms and Changes 1.4.0-rc.2 - <!---
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ PhET Release Candidate Test Template ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Notes and Instructions for Developers:
1. Comments indicate whether something can be omitted or edited.
2. Please check the comments before trying to omit or edit something.
3. Please don't rearrange the sections.
-->
@KatieWoe, @arouinfar, @ariel-phet, @kathy-phet, energy-forms-and-changes/1.4.0-rc.2 is ready for RC testing. The phet-io version of this release will be shared with a client. The publication due date is October 1st. This is the 2nd release from the 1.4 release branch, but the code changes since the previous release were quite significant, so a full retest is needed. There are also several fixed issues that should be checked (these are listed in two of the sections below). Please document issues in https://github.com/phetsims/energy-forms-and-changes/issues and link to this issue.
Assigning to @ariel-phet for prioritization.
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 1: General RC Testing [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>General RC Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Click every single button.
- Test all possible forms of input.
- Test all mouse/trackpad inputs.
- Test all touchscreen inputs.
- If there is sound, make sure it works.
- Make sure you can't lose anything.
- Play with the sim normally.
- Try to break the sim.
- Test all query parameters on all platforms. (See [QA Book](https://github.com/phetsims/QA/blob/master/doc/qa-book.md)
for a list of query parameters.)
- Download HTML on Chrome and iOS.
- Make sure the iFrame version of the simulation is working as intended on all platforms.
- Make sure the XHTML version of the simulation is working as intended on all platforms.
- Complete the test matrix.
- Don't forget to make sure the sim works with Legends of Learning.
- Test the Game Up harness on at least one platform.
- Check [this](https://docs.google.com/spreadsheets/d/1umIAmhn89WN1nzcHKhYJcv-n3Oj6ps1wITc-CjWYytE/edit#gid=0) LoL
spreadsheet and notify AR or AM if it is not there.
- If this is rc.2 please do a memory test.
- When making an issue, check to see if it was in a previously published version
- Try to include version numbers for browsers
- If there is a console available, check for errors and include them in the Problem Description.
- As an RC begins and ends, check the sim repo. If there is a maintenance issue, check it and notify developers if
there is a problem.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Issues to Verify</h3>
- [x] [Water temp has delayed reset on second screen](https://github.com/phetsims/energy-forms-and-changes/issues/343)
- [x] [Water energy chunks boil far too early when biker runs out of food](https://github.com/phetsims/energy-forms-and-changes/issues/346)
- [x] [Energy chunks don't appear in falling water if faucet is off](https://github.com/phetsims/energy-forms-and-changes/issues/347)
These issues should have the "status:ready-for-qa" label. Check these issues off and close them if they are fixed.
Otherwise, post a comment in the issue saying that it wasn't fixed and link back to this issue. If the label is
"status:ready-for-review" or "status:fixed-pending-deploy" then assign back to the developer when done, even if fixed.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Simulation](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/energy-forms-and-changes_all_phet.html)**
- **[iFrame](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/energy-forms-and-changes_all_iframe_phet.html)**
- **[XHTML](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet/xhtml/energy-forms-and-changes_all.xhtml)**
- **[Test Matrix](https://docs.google.com/spreadsheets/d/19WAm2BOsEg1f8XCLo-PT8eQsGgb3uR9Npt6obPrMiA4/edit#gid=1313829856)**
- **[Legends of Learning Harness](https://developers.legendsoflearning.com/public-harness/index.html?startPayload=%7B%22languageCode%22%3A%22en%22%7D)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 2: PhET-iO RC Test [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>PhET-iO RC Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Make sure that public files do not have password protection. Use a private browser for this.
- Make sure that private files do have password protection. Use a private browser for this.
- Make sure standalone sim is working properly.
- Make sure the wrapper index is working properly.
- Make sure each wrapper is working properly.
- Launch the simulation in Studio with ?stringTest=xss and make sure the sim doesn't navigate to youtube
- For newer PhET-iO wrapper indices, save the "basic example of a functional wrapper" as a .html file and open it. Make
sure the simulation loads without crashing or throwing errors.
- For an update or maintenance release please check the backwards compatibility of the playback wrapper.
[Here's the link to the previous wrapper.](link)
- Load the login wrapper just to make sure it works. Do so by adding this link from the sim deployed root:
```
/wrappers/login/?wrapper=record&validationRule=validateDigits&&numberOfDigits=5&promptText=ENTER_A_5_DIGIT_NUMBER
```
Further instructions in QA Book
- Conduct a recording test to Metacog, further instructions in the QA Book. Do this for iPadOS + Safari and one other random platform.
- Conduct a memory test on the stand alone sim wrapper.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Focus and Special Instructions</h3>
Please pay close attention to loading/setting state (hitting "launch" in Studio), especially on the second screen. Because of some implementation decisions, the loaded state may not match perfectly for all cases, but please bring any behavior that is unexpected to my attention. I think @KatieWoe is getting pretty familiar with what to expect for state on this sim, but feel free to Slack me any time with questions, or make an issue. Thanks!
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Issues to Verify</h3>
- [x] [Tea kettle energy chunk preloading](https://github.com/phetsims/energy-forms-and-changes/issues/336)
- [x] [Second screen state](https://github.com/phetsims/energy-forms-and-changes/issues/306)
- [x] [Feed Me button is pressed when launched](https://github.com/phetsims/energy-forms-and-changes/issues/335)
- [x] [Hiding Buttons on Second Screen Hides Connected Element as Well](https://github.com/phetsims/energy-forms-and-changes/issues/348)
- [x] [screen 1 blocks and beakers emitted chunks don't work in state](https://github.com/phetsims/energy-forms-and-changes/issues/361)
- [x] [Studio launch doesn't restore EnergyChunk state like the state wrapper](https://github.com/phetsims/energy-forms-and-changes/issues/362)
- [x] [Memory leak in the view when setting state](https://github.com/phetsims/energy-forms-and-changes/issues/368)
- [x] [Water from faucet can have gaps and other odd-looking behavior](https://github.com/phetsims/energy-forms-and-changes/issues/369)
These issues should have the "status:ready-for-qa" label. Check these issues off and close them if they are fixed.
Otherwise, post a comment in the issue saying that it wasn't fixed and link back to this issue. If the label is
"status:ready-for-review" or "status:fixed-pending-deploy" then assign back to the developer when done, even if fixed.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Wrapper Index](https://phet-dev.colorado.edu/html/energy-forms-and-changes/1.4.0-rc.2/phet-io/)**
- **[Test Matrix](https://docs.google.com/spreadsheets/d/17FYYm5Halt8VVi4vMIsjGsrtELD317S5f0zNUdWqt-A/edit#gid=1474718953)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Section 4: FAQs for QA Members [DO NOT OMIT, DO NOT EDIT]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>FAQs for QA Members</b></summary>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>There are multiple tests in this issue... Which test should I do first?</i></summary>
Test in order! Test the first thing first, the second thing second, and so on.
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>How should I format my issue?</i></summary>
Here's a template for making issues:
<b>Test Device</b>
blah
<b>Operating System</b>
blah
<b>Browser</b>
blah
<b>Problem Description</b>
blah
<b>Steps to Reproduce</b>
blah
<b>Visuals</b>
blah
<details>
<summary><b>Troubleshooting Information</b></summary>
blah
</details>
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>Who should I assign?</i></summary>
We typically assign the developer who opened the issue in the QA repository.
</details>
<br>
<!--- [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>My question isn't in here... What should I do?</i></summary>
You should:
1. Consult the [QA Book](https://github.com/phetsims/QA/blob/master/doc/qa-book.md).
2. Google it.
3. Ask Katie.
4. Ask a developer.
5. Google it again.
6. Cry.
</details>
<br>
<hr>
</details>
|
test
|
rc test energy forms and changes rc phet release candidate test template notes and instructions for developers comments indicate whether something can be omitted or edited please check the comments before trying to omit or edit something please don t rearrange the sections katiewoe arouinfar ariel phet kathy phet energy forms and changes rc is ready for rc testing the phet io version of this release will be shared with a client the publication due date is october this is the release from the release branch but the code changes since the previous release were quite significant so a full retest is needed there are also several fixed issues that should be checked these are listed in two of the sections below please document issues in and link to this issue assigning to ariel phet for prioritization section general rc testing general rc test what to test click every single button test all possible forms of input test all mouse trackpad inputs test all touchscreen inputs if there is sound make sure it works make sure you can t lose anything play with the sim normally try to break the sim test all query parameters on all platforms see for a list of query parameters download html on chrome and ios make sure the iframe version of the simulation is working as intended on all platforms make sure the xhtml version of the simulation is working as intended on all platforms complete the test matrix don t forget to make sure the sim works with legends of learning test the game up harness on at least one platform check lol spreadsheet and notify ar or am if it not there if this is rc please do a memory test when making an issue check to see if it was in a previously published version try to include version numbers for browsers if there is a console available check for errors and include them in the problem description as an rc begins and ends check the sim repo if there is a maintenance issue check it and notify developers if there is a problem issues to verify these issues should 
have the status ready for qa label check these issues off and close them if they are fixed otherwise post a comment in the issue saying that it wasn t fixed and link back to this issue if the label is status ready for review or status fixed pending deploy then assign back to the developer when done even if fixed link s section phet io rc test phet io rc test what to test make sure that public files do not have password protection use a private browser for this make sure that private files do have password protection use a private browser for this make sure standalone sim is working properly make sure the wrapper index is working properly make sure each wrapper is working properly launch the simulation in studio with stringtest xss and make sure the sim doesn t navigate to youtube for newer phet io wrapper indices save the basic example of a functional wrapper as a html file and open it make sure the simulation loads without crashing or throwing errors for an update or maintenance release please check the backwards compatibility of the playback wrapper link load the login wrapper just to make sure it works do so by adding this link from the sim deployed root wrappers login wrapper record validationrule validatedigits numberofdigits prompttext enter a digit number further instructions in qa book conduct a recording test to metacog further instructions in the qa book do this for ipados safari and one other random platform conduct a memory test on the stand alone sim wrapper can be omitted should be edited if not omitted focus and special instructions please pay close attention to loading setting state hitting launch in studio especially on the second screen because of some implementation decisions the loaded state may not match perfectly for all cases but please bring any behavior that is unexpected to my attention i think katiewoe is getting pretty familiar with what to expect for state on this sim but feel free to slack me any time with questions or make an issue 
thanks issues to verify these issues should have the status ready for qa label check these issues off and close them if they are fixed otherwise post a comment in the issue saying that it wasn t fixed and link back to this issue if the label is status ready for review or status fixed pending deploy then assign back to the developer when done even if fixed link s section faqs for qa members faqs for qa members there are multiple tests in this issue which test should i do first test in order test the first thing first the second thing second and so on how should i format my issue here s a template for making issues test device blah operating system blah browser blah problem description blah steps to reproduce blah visuals blah troubleshooting information blah who should i assign we typically assign the developer who opened the issue in the qa repository my question isn t in here what should i do you should consult the google it ask katie ask a developer google it again cry
| 1
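The PhET-iO record above hard-codes the login-wrapper query string (including a stray `&&`). A sketch of building the same URL programmatically; the parameter names come from the record, the builder itself is hypothetical:

```python
from urllib.parse import urlencode

def login_wrapper_url(root: str) -> str:
    # Parameters as listed in the record's login-wrapper instructions.
    params = {
        "wrapper": "record",
        "validationRule": "validateDigits",
        "numberOfDigits": 5,
        "promptText": "ENTER_A_5_DIGIT_NUMBER",
    }
    return root + "/wrappers/login/?" + urlencode(params)
```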
|
204,292
| 15,437,247,358
|
IssuesEvent
|
2021-03-07 16:01:27
|
commercialhaskell/stackage
|
https://api.github.com/repos/commercialhaskell/stackage
|
opened
|
sydtest-yesod-0.0.0.0 fails to compile with GHC 9.0.1
|
failure: compile failure: test-suite
|
```
Preprocessing test suite 'sydtest-yesod-blog-example-test' for sydtest-yesod-0.0.0.0..
Building test suite 'sydtest-yesod-blog-example-test' for sydtest-yesod-0.0.0.0..
[1 of 4] Compiling Example.Blog
/var/stackage/work/unpack-dir/unpacked/sydtest-yesod-0.0.0.0-c5b28e6fe56216c6c1eba08800cd2468c81dcd17dc72dfbe6db820d0fc0da888/blog-example/Example/Blog.hs:27:1: error:
Generating Persistent entities now requires the following language extensions:
DataKinds
FlexibleInstances
Please enable the extensions by copy/pasting these lines into the top of your file:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
|
27 | share
| ^^^^^...
```
Will skip testing for now
CC @NorfairKing
|
1.0
|
sydtest-yesod-0.0.0.0 fails to compile with GHC 9.0.1 - ```
Preprocessing test suite 'sydtest-yesod-blog-example-test' for sydtest-yesod-0.0.0.0..
Building test suite 'sydtest-yesod-blog-example-test' for sydtest-yesod-0.0.0.0..
[1 of 4] Compiling Example.Blog
/var/stackage/work/unpack-dir/unpacked/sydtest-yesod-0.0.0.0-c5b28e6fe56216c6c1eba08800cd2468c81dcd17dc72dfbe6db820d0fc0da888/blog-example/Example/Blog.hs:27:1: error:
Generating Persistent entities now requires the following language extensions:
DataKinds
FlexibleInstances
Please enable the extensions by copy/pasting these lines into the top of your file:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
|
27 | share
| ^^^^^...
```
Will skip testing for now
CC @NorfairKing
|
test
|
sydtest yesod fails to compile with ghc preprocessing test suite sydtest yesod blog example test for sydtest yesod building test suite sydtest yesod blog example test for sydtest yesod compiling example blog var stackage work unpack dir unpacked sydtest yesod blog example example blog hs error generating persistent entities now requires the following language extensions datakinds flexibleinstances please enable the extensions by copy pasting these lines into the top of your file language datakinds language flexibleinstances share will skip testing for now cc norfairking
| 1
|
22,761
| 11,782,783,473
|
IssuesEvent
|
2020-03-17 03:09:53
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
az acr import with --source and --registry is confusing.
|
Container Registry Service Attention
|
**Describe the bug**
az acr import
**To Reproduce**
`az acr import -h` has `--source` and `--registry` parameters. The help section doesn't document that `-r` and `--source` with registry is mutually exclusive (i.e. both can't specify registry/loginserver name). This causes confusion and bad user experience for cx, running into error state not knowing what's the issue. The error message should also be refined to include the issue above when user actually runs into that condition.
Our [official doc](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-import-images#import-from-a-registry-in-a-different-subscription
) does say: `Notice that the --source parameter specifies only the source repository and image name, not the registry login server name.`
**Expected behavior**
Proper help and error message.
**Environment summary**
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: powershell.exe
azure-cli 2.0.70 *
```
|
1.0
|
az acr import with --source and --registry is confusing. - **Describe the bug**
az acr import
**To Reproduce**
`az acr import -h` has `--source` and `--registry` parameters. The help section doesn't document that `-r` and `--source` with registry is mutually exclusive (i.e. both can't specify registry/loginserver name). This causes confusion and bad user experience for cx, running into error state not knowing what's the issue. The error message should also be refined to include the issue above when user actually runs into that condition.
Our [official doc](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-import-images#import-from-a-registry-in-a-different-subscription
) does say: `Notice that the --source parameter specifies only the source repository and image name, not the registry login server name.`
**Expected behavior**
Proper help and error message.
**Environment summary**
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: powershell.exe
azure-cli 2.0.70 *
```
|
non_test
|
az acr import with source and registry is confusing describe the bug az acr import to reproduce az acr import h has source and registry parameters the help section doesn t document that r and source with registry is mutually exclusive i e both can t specify registry loginserver name this causes confusion and bad user experience for cx running into error state not knowing what s the issue the error message should also be refined to include the issue above when user actually runs into that condition our does say notice that the source parameter specifies only the source repository and image name not the registry login server name expected behavior proper help and error message environment summary windows python shell powershell exe azure cli
| 0
|
74,050
| 7,373,768,097
|
IssuesEvent
|
2018-03-13 18:12:49
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
opened
|
Bazel Debug build failure
|
infra/bazel test failures
|
Relevant log:
```
+ mkdir -p /tmpfs/src/keystore
+ cp /tmpfs/src/gfile/GrpcTesting-d0eeee2db331.json /tmpfs/src/keystore/4321_grpc-testing-service
++ mktemp -d
+ temp_dir=/tmp/tmp.qfu09RtbtH
+ ln -f /tmpfs/src/gfile/bazel-canary /tmp/tmp.qfu09RtbtH/bazel
ln: failed to create hard link '/tmp/tmp.qfu09RtbtH/bazel' => '/tmpfs/src/gfile/bazel-canary': Invalid cross-device link
```
|
1.0
|
Bazel Debug build failure - Relevant log:
```
+ mkdir -p /tmpfs/src/keystore
+ cp /tmpfs/src/gfile/GrpcTesting-d0eeee2db331.json /tmpfs/src/keystore/4321_grpc-testing-service
++ mktemp -d
+ temp_dir=/tmp/tmp.qfu09RtbtH
+ ln -f /tmpfs/src/gfile/bazel-canary /tmp/tmp.qfu09RtbtH/bazel
ln: failed to create hard link '/tmp/tmp.qfu09RtbtH/bazel' => '/tmpfs/src/gfile/bazel-canary': Invalid cross-device link
```
|
test
|
bazel debug build failure relevant log mkdir p tmpfs src keystore cp tmpfs src gfile grpctesting json tmpfs src keystore grpc testing service mktemp d temp dir tmp tmp ln f tmpfs src gfile bazel canary tmp tmp bazel ln failed to create hard link tmp tmp bazel tmpfs src gfile bazel canary invalid cross device link
| 1
|
27,714
| 4,326,942,649
|
IssuesEvent
|
2016-07-26 08:42:28
|
NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
closed
|
#1397 Guest UI: Personal Retreat Availability Page: Text change
|
Change Request Deployed to Test
|
Also, please add to the descriptive text:
"Begin your reservation by providing the following information (* = required):"
|
1.0
|
#1397 Guest UI: Personal Retreat Availability Page: Text change - Also, please add to the descriptive text:
"Begin your reservation by providing the following information (* = required):"
|
test
|
guest ui personal retreat availability page text change also please add to the descriptive text begin your reservation by providing the following information required
| 1
|
312,012
| 26,831,607,157
|
IssuesEvent
|
2023-02-02 16:22:22
|
Flowminder/FlowKit
|
https://api.github.com/repos/Flowminder/FlowKit
|
closed
|
FlowAuth test timeout
|
bug FlowAuth tests P-Next
|
As the permission space for FlowAPI increases, the FlowAuth end-to-end test is becoming increasingly shaky and slow.
**Product**
FlowAPI test suite
**Version**
1.5 +
**To Reproduce**
Run `flowauth_end_to_end` after adding new query types
**Expected behaviour**
Test ends in a timely manner (within half an hour), but does not cause failure due to timeout.
**Additional context**
The quick fix is to increase the timeout for Cypress, but this obviously isn't sustainable - do we need to reexamine the testing framework for flowauth?
|
1.0
|
FlowAuth test timeout - As the permission space for FlowAPI increases, the FlowAuth end-to-end test is becoming increasingly shaky and slow.
**Product**
FlowAPI test suite
**Version**
1.5 +
**To Reproduce**
Run `flowauth_end_to_end` after adding new query types
**Expected behaviour**
Test ends in a timely manner (within half an hour), but does not cause failure due to timeout.
**Additional context**
The quick fix is to increase the timeout for Cypress, but this obviously isn't sustainable - do we need to reexamine the testing framework for flowauth?
|
test
|
flowauth test timeout as the permission space for flowapi increases the flowauth end to end test is becoming increasingly shaky and slow product flowapi test suite version to reproduce run flowauth end to end after adding new query types expected behaviour test ends in a timely manner within half an hour but does not cause failure due to timeout additional context the quick fix is to increase the timeout for cypress but this obviously isn t sustainable do we need to reexamine the testing framework for flowauth
| 1
|
450,521
| 31,927,945,310
|
IssuesEvent
|
2023-09-19 04:26:35
|
UQcsse3200/2023-studio-1
|
https://api.github.com/repos/UQcsse3200/2023-studio-1
|
opened
|
Add logging to oxygen system
|
documentation team 5 task sprint 3
|
# **Description**
**Task:** Adding logging to oxygen system.
**Feature:** [Oxygen System implementation](#137)
Adding logger statements to PlanetOxygenService and OxygenDisplay to assist in potential future debugging.
# **Milestones**
List of steps that need to be completed for this task.
- [ ] Add to PlanetOxygenService.java (Sept. 18)
- [ ] Add to OxygenDisplay (Sept. 19)
**Completion Deadline:** Sept 19.
# **Member**
- e.g. Gil (@gilgilgilgilgil ) (Gil)
|
1.0
|
Add logging to oxygen system - # **Description**
**Task:** Adding logging to oxygen system.
**Feature:** [Oxygen System implementation](#137)
Adding logger statements to PlanetOxygenService and OxygenDisplay to assist in potential future debugging.
# **Milestones**
List of steps that need to be completed for this task.
- [ ] Add to PlanetOxygenService.java (Sept. 18)
- [ ] Add to OxygenDisplay (Sept. 19)
**Completion Deadline:** Sept 19.
# **Member**
- e.g. Gil (@gilgilgilgilgil ) (Gil)
|
non_test
|
add logging to oxygen system description task adding logging to oxygen system feature adding logger statements to planetoxygenservice and oxygendisplay to assist in potential future debugging milestones list of steps that need to be completed for this task add to planetoxygenservice java sept add to oxygendisplay sept completion deadline sept member e g gil gilgilgilgilgil gil
| 0
|
242,488
| 20,251,148,753
|
IssuesEvent
|
2022-02-14 18:00:59
|
rspott/WAF-test02
|
https://api.github.com/repos/rspott/WAF-test02
|
opened
|
There should be more than one owner assigned to your subscription for 1 Subscription(s)
|
WARP-Import test1 Security Azure Advisor
|
<a href="https://aka.ms/azure-advisor-portal">There should be more than one owner assigned to your subscription for 1 Subscription(s)</a>
<a href="https://aka.ms/azure-advisor-portal">There should be more than one owner assigned to your subscription for 1 Subscription(s)</a>
|
1.0
|
There should be more than one owner assigned to your subscription for 1 Subscription(s) - <a href="https://aka.ms/azure-advisor-portal">There should be more than one owner assigned to your subscription for 1 Subscription(s)</a>
<a href="https://aka.ms/azure-advisor-portal">There should be more than one owner assigned to your subscription for 1 Subscription(s)</a>
|
test
|
there should be more than one owner assigned to your subscription for subscription s
| 1
|
44,630
| 11,473,136,205
|
IssuesEvent
|
2020-02-09 21:20:05
|
jonno85uk/chestnut
|
https://api.github.com/repos/jonno85uk/chestnut
|
closed
|
Setup Travis CI for continuos builds after each commit and upload resulted AppImage
|
build
|
### TODO
0. [ ] Create `../blob/master/.travis.yml`
1. https://github.com/appimage/AppImageKit
1.1. https://github.com/probonopd/linuxdeployqt
1.2 https://github.com/linuxdeploy
2. https://github.com/probonopd/uploadtool
3. [ ] Enable Travis CI pushing to GitHub releases for this repo.
For example, take a look on *LeoCAD* `.travis.yml` implementation:
- https://github.com/leozide/leocad/blob/master/.travis.yml
And here is how resulted builds would look like:
- https://github.com/leozide/leocad/releases/tag/continuous
For any AppImage packaging questions & support:
- https://docs.appimage.org/user-guide/faq.html#question-where-do-i-get-support
|
1.0
|
Setup Travis CI for continuos builds after each commit and upload resulted AppImage - ### TODO
0. [ ] Create `../blob/master/.travis.yml`
1. https://github.com/appimage/AppImageKit
1.1. https://github.com/probonopd/linuxdeployqt
1.2 https://github.com/linuxdeploy
2. https://github.com/probonopd/uploadtool
3. [ ] Enable Travis CI pushing to GitHub releases for this repo.
For example, take a look on *LeoCAD* `.travis.yml` implementation:
- https://github.com/leozide/leocad/blob/master/.travis.yml
And here is how resulted builds would look like:
- https://github.com/leozide/leocad/releases/tag/continuous
For any AppImage packaging questions & support:
- https://docs.appimage.org/user-guide/faq.html#question-where-do-i-get-support
|
non_test
|
setup travis ci for continuos builds after each commit and upload resulted appimage todo create blob master travis yml enable travis ci pushing to github releases for this repo for example take a look on leocad travis yml implementation and here is how resulted builds would look like for any appimage packaging questions support
| 0
|
35,463
| 12,339,574,713
|
IssuesEvent
|
2020-05-14 18:21:51
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Performance impact to VMs
|
Pri2 assigned-to-author doc-enhancement security-center/svc triaged
|
It would be great to post a guidance on the type of performance impact that can be expected of running the scan. For instance, the scans will increase CPU by X% and memory by Y%. Would this be possible?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a9303652-2bf3-e20a-4213-628466cc209d
* Version Independent ID: b73187c6-e566-49af-ef57-d517557074a7
* Content: [Advanced Data Security for IaaS in Azure Security Center](https://docs.microsoft.com/en-us/azure/security-center/security-center-iaas-advanced-data)
* Content Source: [articles/security-center/security-center-iaas-advanced-data.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security-center/security-center-iaas-advanced-data.md)
* Service: **security-center**
* GitHub Login: @monhaber
* Microsoft Alias: **v-mohabe**
|
True
|
Performance impact to VMs - It would be great to post a guidance on the type of performance impact that can be expected of running the scan. For instance, the scans will increase CPU by X% and memory by Y%. Would this be possible?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a9303652-2bf3-e20a-4213-628466cc209d
* Version Independent ID: b73187c6-e566-49af-ef57-d517557074a7
* Content: [Advanced Data Security for IaaS in Azure Security Center](https://docs.microsoft.com/en-us/azure/security-center/security-center-iaas-advanced-data)
* Content Source: [articles/security-center/security-center-iaas-advanced-data.md](https://github.com/Microsoft/azure-docs/blob/master/articles/security-center/security-center-iaas-advanced-data.md)
* Service: **security-center**
* GitHub Login: @monhaber
* Microsoft Alias: **v-mohabe**
|
non_test
|
performance impact to vms it would be great to post a guidance on the type of performance impact that can be expected of running the scan for instance the scans will increase cpu by x and memory by y would this be possible document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service security center github login monhaber microsoft alias v mohabe
| 0
|
21,994
| 3,930,979,701
|
IssuesEvent
|
2016-04-25 10:23:33
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
e2e flake: Kubectl client Simple pod [It] should support exec through an HTTP proxy
|
area/test kind/flake priority/P1 team/CSI-API Machinery SIG
|
Hit this twice since yesterday afternoon.
Example:
http://kubekins.dls.corp.google.com:8081/job/kubernetes-pull-build-test-e2e-gce/24671/#
Previous closed instances of this flake:
#19500 #17523 #15787 #15713
|
1.0
|
e2e flake: Kubectl client Simple pod [It] should support exec through an HTTP proxy - Hit this twice since yesterday afternoon.
Example:
http://kubekins.dls.corp.google.com:8081/job/kubernetes-pull-build-test-e2e-gce/24671/#
Previous closed instances of this flake:
#19500 #17523 #15787 #15713
|
test
|
flake kubectl client simple pod should support exec through an http proxy hit this twice since yesterday afternoon example previous closed instances of this flake
| 1
|
31,885
| 8,767,213,694
|
IssuesEvent
|
2018-12-17 19:04:35
|
akka/akka-persistence-couchbase
|
https://api.github.com/repos/akka/akka-persistence-couchbase
|
closed
|
Release through travis doesn't quite work
|
bug t:build
|
First saw an error message about the release not existing, then when retriggered it said some files was already there, looking at bintray I see only artifacts for one Scala version (2.12) publishing 2.11 manually for now.
|
1.0
|
Release through travis doesn't quite work - First saw an error message about the release not existing, then when retriggered it said some files was already there, looking at bintray I see only artifacts for one Scala version (2.12) publishing 2.11 manually for now.
|
non_test
|
release through travis doesn t quite work first saw an error message about the release not existing then when retriggered it said some files was already there looking at bintray i see only artifacts for one scala version publishing manually for now
| 0
|
55,257
| 6,460,676,200
|
IssuesEvent
|
2017-08-16 05:22:51
|
rLoopTeam/eng-software-pod
|
https://api.github.com/repos/rLoopTeam/eng-software-pod
|
closed
|
Verify the functionality of Undervolt trip configuration
|
clarification needed test
|
Test if this function works with the IRQ. Log the results.
|
1.0
|
Verify the functionality of Undervolt trip configuration - Test if this function works with the IRQ. Log the results.
|
test
|
verify the functionality of undervolt trip configuration test if this function works with the irq log the results
| 1
|
12,939
| 9,816,504,711
|
IssuesEvent
|
2019-06-13 14:47:33
|
PATRIC3/patric3_website
|
https://api.github.com/repos/PATRIC3/patric3_website
|
opened
|
RNA-seq: Tuxedo pipeline deprecated
|
Critical Service: RNA-seq
|
This was reported by one of the users during Seattle Workshop.
Data analyzed with Cufflinks in particular were reported to be susceptible to batch effects. The tuxedo pipeline was replaced in 2016 with https://www.nature.com/articles/nprot.2016.095 according to https://www.biostars.org/p/327842/
We need to update the algorithms used in the RNA-seq service to make sure it uses the latest and widely accepted tools.
|
1.0
|
RNA-seq: Tuxedo pipeline deprecated - This was reported by one of the users during Seattle Workshop.
Data analyzed with Cufflinks in particular were reported to be susceptible to batch effects. The tuxedo pipeline was replaced in 2016 with https://www.nature.com/articles/nprot.2016.095 according to https://www.biostars.org/p/327842/
We need to update the algorithms used in the RNA-seq service to make sure it uses the latest and widely accepted tools.
|
non_test
|
rna seq tuxedo pipeline deprecated this was reported by one of the users during seattle workshop data analyzed with cufflinks in particular were reported to be susceptible to batch effects the tuxedo pipeline was replaced in with according to we need to update the algorithms used in the rna seq service to make sure it uses the latest and widely accepted tools
| 0
|
182,847
| 14,167,532,316
|
IssuesEvent
|
2020-11-12 10:25:17
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
CI/TST: read_html test_banklist_url_positional_match failing with ResourceWarning on Travis
|
IO HTML Testing
|
Travis builds are recently failing, eg https://travis-ci.org/github/pandas-dev/pandas/jobs/742927007
```
=================================== FAILURES ===================================
_____________ TestReadHtml.test_banklist_url_positional_match[bs4] _____________
[gw0] linux -- Python 3.7.8 /home/travis/miniconda3/envs/pandas-dev/bin/python
self = <pandas.tests.io.test_html.TestReadHtml object at 0x7f3091661050>
@tm.network
def test_banklist_url_positional_match(self):
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
# Passing match argument as positional should cause a FutureWarning.
with tm.assert_produces_warning(FutureWarning):
df1 = self.read_html(
> url, "First Federal Bank of Florida", attrs={"id": "table"}
)
pandas/tests/io/test_html.py:130:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <contextlib._GeneratorContextManager object at 0x7f309168d4d0>
type = None, value = None, traceback = None
def __exit__(self, type, value, traceback):
if type is None:
try:
> next(self.gen)
E AssertionError: Caused unexpected warning(s): [('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=18, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43648), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/html5lib/treebuilders/base.py', 38), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=16, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43650), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/html5lib/treebuilders/base.py', 38), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=43, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43962), raddr=('172.217.214.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=42, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 46504), raddr=('74.125.124.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=41, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 46502), raddr=('74.125.124.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=15, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43640), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335)]
../../../miniconda3/envs/pandas-dev/lib/python3.7/contextlib.py:119: AssertionError
```
|
1.0
|
CI/TST: read_html test_banklist_url_positional_match failing with ResourceWarning on Travis - Travis builds are recently failing, eg https://travis-ci.org/github/pandas-dev/pandas/jobs/742927007
```
=================================== FAILURES ===================================
_____________ TestReadHtml.test_banklist_url_positional_match[bs4] _____________
[gw0] linux -- Python 3.7.8 /home/travis/miniconda3/envs/pandas-dev/bin/python
self = <pandas.tests.io.test_html.TestReadHtml object at 0x7f3091661050>
@tm.network
def test_banklist_url_positional_match(self):
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
# Passing match argument as positional should cause a FutureWarning.
with tm.assert_produces_warning(FutureWarning):
df1 = self.read_html(
> url, "First Federal Bank of Florida", attrs={"id": "table"}
)
pandas/tests/io/test_html.py:130:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <contextlib._GeneratorContextManager object at 0x7f309168d4d0>
type = None, value = None, traceback = None
def __exit__(self, type, value, traceback):
if type is None:
try:
> next(self.gen)
E AssertionError: Caused unexpected warning(s): [('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=18, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43648), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/html5lib/treebuilders/base.py', 38), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=16, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43650), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/html5lib/treebuilders/base.py', 38), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=43, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43962), raddr=('172.217.214.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=42, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 46504), raddr=('74.125.124.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=41, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 46502), raddr=('74.125.124.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335), ('ResourceWarning', ResourceWarning("unclosed <ssl.SSLSocket fd=15, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.20.0.10', 43640), raddr=('172.217.212.95', 443)>"), '/home/travis/miniconda3/envs/pandas-dev/lib/python3.7/site-packages/bs4/builder/_html5lib.py', 335)]
../../../miniconda3/envs/pandas-dev/lib/python3.7/contextlib.py:119: AssertionError
```
|
test
|
ci tst read html test banklist url positional match failing with resourcewarning on travis travis builds are recently failing eg failures testreadhtml test banklist url positional match linux python home travis envs pandas dev bin python self tm network def test banklist url positional match self url passing match argument as positional should cause a futurewarning with tm assert produces warning futurewarning self read html url first federal bank of florida attrs id table pandas tests io test html py self type none value none traceback none def exit self type value traceback if type is none try next self gen e assertionerror caused unexpected warning s envs pandas dev lib contextlib py assertionerror
| 1
|
108,426
| 16,777,816,930
|
IssuesEvent
|
2021-06-15 01:04:00
|
gms-ws-demo/nibrs-pr-test
|
https://api.github.com/repos/gms-ws-demo/nibrs-pr-test
|
closed
|
CVE-2018-3258 (High) detected in mysql-connector-java-5.1.47.jar - autoclosed
|
security vulnerability
|
## CVE-2018-3258 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.47.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: nibrs-pr-test/tools/nibrs-staging-data/target/nibrs-staging-data/WEB-INF/lib/mysql-connector-java-5.1.47.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.47.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs-pr-test/commit/860cc22f54e17594e32e303f0716fb065202fff5">860cc22f54e17594e32e303f0716fb065202fff5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2018-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3258>CVE-2018-3258</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3258">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3258</a></p>
<p>Release Date: 2018-10-17</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.13</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.47","packageFilePaths":["/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.47","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:8.0.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-3258","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3258","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-3258 (High) detected in mysql-connector-java-5.1.47.jar - autoclosed - ## CVE-2018-3258 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.47.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: nibrs-pr-test/tools/nibrs-staging-data/target/nibrs-staging-data/WEB-INF/lib/mysql-connector-java-5.1.47.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.47.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs-pr-test/commit/860cc22f54e17594e32e303f0716fb065202fff5">860cc22f54e17594e32e303f0716fb065202fff5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2018-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3258>CVE-2018-3258</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3258">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3258</a></p>
<p>Release Date: 2018-10-17</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.13</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.47","packageFilePaths":["/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.47","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:8.0.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-3258","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3258","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in mysql connector java jar autoclosed cve high severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file nibrs pr test tools nibrs staging data pom xml path to vulnerable library nibrs pr test tools nibrs staging data target nibrs staging data web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and prior easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac l pr l ui n s u c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion mysql mysql connector java basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and prior easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac l pr l ui n s u c h i h a h vulnerabilityurl
| 0
|
186,712
| 15,081,558,386
|
IssuesEvent
|
2021-02-05 13:23:03
|
Cooltimmetje/Skuddbot-v2
|
https://api.github.com/repos/Cooltimmetje/Skuddbot-v2
|
opened
|
Betting shortcuts
|
accepted addition documentation
|
Add support for various betting shortcuts.
- [ ] Documentation
- [ ] Implement global shortcuts
- [ ] `bet`/nothing
- Bets the default bet amount of the user
- [ ] `all`
- Bets all currency of the user
- [ ] `half`
- Bets half the amount of the user's currency
- [ ] Percentages
- Bets the percentage of the user's currency amount. Example: `10%` of 100000 currency bets 10000.
- [ ] Thousands
- Bets the amount specified in thousands. Example: `1k` bets 1000.
- [ ] Implement into minigames
- [ ] Blackjack
- [ ] Challenge
- [ ] Refund remaining amount if opponent does not have enough money.
- [ ] Free for All
- [ ] Implement specific keyword `match`.
- Matches the highest bet currently entered.
- [ ] Double or Nothing
|
1.0
|
Betting shortcuts - Add support for various betting shortcuts.
- [ ] Documentation
- [ ] Implement global shortcuts
- [ ] `bet`/nothing
- Bets the default bet amount of the user
- [ ] `all`
- Bets all currency of the user
- [ ] `half`
- Bets half the amount of the user's currency
- [ ] Percentages
- Bets the percentage of the user's currency amount. Example: `10%` of 100000 currency bets 10000.
- [ ] Thousands
- Bets the amount specified in thousands. Example: `1k` bets 1000.
- [ ] Implement into minigames
- [ ] Blackjack
- [ ] Challenge
- [ ] Refund remaining amount if opponent does not have enough money.
- [ ] Free for All
- [ ] Implement specific keyword `match`.
- Matches the highest bet currently entered.
- [ ] Double or Nothing
|
non_test
|
betting shortcuts add support for various betting shortcuts documentation implement global shortcuts bet nothing bets the default bet amount of the user all bets all currency of the user half bets half the amount of the user s currency percentages bets the percentage of the user s currency amount example of currency bets thousands bets the amount specified in thousands example bets implement into minigames blackjack challenge refund remaining amount if opponent does not have enough money free for all implement specific keyword match matches the highest bet currently entered double or nothing
| 0
|
93,944
| 8,459,726,905
|
IssuesEvent
|
2018-10-22 16:44:57
|
ValveSoftware/csgo-osx-linux
|
https://api.github.com/repos/ValveSoftware/csgo-osx-linux
|
closed
|
[PANORAMA] Single core getting hammered in main menu
|
Need Retest
|
#### Your system information
* System information from steam (`Steam` -> `Help` -> `System Information`) in a [gist](https://gist.github.com/): https://gist.github.com/Veske/dce89e00e949d82b9baa223547212351
* Have you checked for system updates?: [Yes/No] Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large pastes as a [Github Gist](https://gist.github.com/).
I am seeing a single CPU core getting 100% usage while the others are more or less not doing much.

The usage dropped once I closed CS:GO running with -panorama flag.
#### Steps for reproducing this issue:
1. Launch CS:GO with -panorama flag
2. Alt-tab out and check CPU usage in whatever tool you prefer.
|
1.0
|
[PANORAMA] Single core getting hammered in main menu - #### Your system information
* System information from steam (`Steam` -> `Help` -> `System Information`) in a [gist](https://gist.github.com/): https://gist.github.com/Veske/dce89e00e949d82b9baa223547212351
* Have you checked for system updates?: [Yes/No] Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large pastes as a [Github Gist](https://gist.github.com/).
I am seeing a single CPU core getting 100% usage while the others are more or less not doing much.

The usage dropped once I closed CS:GO running with -panorama flag.
#### Steps for reproducing this issue:
1. Launch CS:GO with -panorama flag
2. Alt-tab out and check CPU usage in whatever tool you prefer.
|
test
|
single core getting hammered in main menu your system information system information from steam steam help system information in a have you checked for system updates yes please describe your issue in as much detail as possible describe what you expected should happen and what did happen please link any large pastes as a i am seeing a single cpu core getting usage while the others are more or less not doing much the usage dropped once i closed cs go running with panorama flag steps for reproducing this issue launch cs go with panorama flag alt tab out and check cpu usage in whatever tool you prefer
| 1
|
83,853
| 7,882,297,262
|
IssuesEvent
|
2018-06-26 22:05:26
|
zkSNACKs/WalletWasabi
|
https://api.github.com/repos/zkSNACKs/WalletWasabi
|
closed
|
Remove selection at tab change
|
UX debug stability/testing
|
Remove selection when tab changes from History or Receive, because if there is only one record, then the selection being kept and there is no way to copypaste.
|
1.0
|
Remove selection at tab change - Remove selection when tab changes from History or Receive, because if there is only one record, then the selection being kept and there is no way to copypaste.
|
test
|
remove selection at tab change remove selection when tab changes from history or receive because if there is only one record then the selection being kept and there is no way to copypaste
| 1
|
106,494
| 9,160,799,301
|
IssuesEvent
|
2019-03-01 08:41:49
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
[Coverity CID :190929]Integer handling issues in /tests/drivers/hwinfo/api/src/main.c
|
Coverity area: Tests bug priority: low
|
Static code scan issues seen in File: /tests/drivers/hwinfo/api/src/main.c
Category: Integer handling issues
Function: test_device_id_get
Component: Tests
CID: 190929
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996
|
1.0
|
[Coverity CID :190929]Integer handling issues in /tests/drivers/hwinfo/api/src/main.c - Static code scan issues seen in File: /tests/drivers/hwinfo/api/src/main.c
Category: Integer handling issues
Function: test_device_id_get
Component: Tests
CID: 190929
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996
|
test
|
integer handling issues in tests drivers hwinfo api src main c static code scan issues seen in file tests drivers hwinfo api src main c category integer handling issues function test device id get component tests cid please fix or provide comments to square it off in coverity in the link
| 1
|
307,244
| 9,415,005,444
|
IssuesEvent
|
2019-04-10 11:35:10
|
nhn/tui.editor
|
https://api.github.com/repos/nhn/tui.editor
|
closed
|
Unexpected column table when press delete key in the table of Wysiwyg (2MD)
|
Category: Table NHN Priority: High Type: Bug
|
## Version
v1.3.1
## Test Environment
Chrome
## Current Behavior

1. In Wysiwyg, press delete key in the table.
2. Unexpected columns appears.
## Expected Behavior
Should not appear unexpected columns.
|
1.0
|
Unexpected column table when press delete key in the table of Wysiwyg (2MD) - ## Version
v1.3.1
## Test Environment
Chrome
## Current Behavior

1. In Wysiwyg, press delete key in the table.
2. Unexpected columns appears.
## Expected Behavior
Should not appear unexpected columns.
|
non_test
|
unexpected column table when press delete key in the table of wysiwyg version test environment chrome current behavior in wysiwyg press delete key in the table unexpected columns appears expected behavior should not appear unexpected columns
| 0
|
209,761
| 16,242,608,896
|
IssuesEvent
|
2021-05-07 11:23:11
|
iver-wharf/iver-wharf.github.io
|
https://api.github.com/repos/iver-wharf/iver-wharf.github.io
|
opened
|
Problem pages
|
documentation
|
We're introducing problem types, according to [IETF RFC-7087](https://www.rfc-archive.org/getrfc?rfc=7807#gsc.tab=0), and need problem describing pages for the following:
- docs/prob/api/internal-server-error.md
- docs/prob/api/invalid-param.md
- docs/prob/api/invalid-param-int.md
- docs/prob/api/invalid-param-uint.md
- docs/prob/api/missing-param-string.md
- docs/prob/api/project/cannot-change-group.md
- docs/prob/api/project/run/params-deserialize.md
- docs/prob/api/project/run/params-serialize.md
- docs/prob/api/project/run/trigger.md
- docs/prob/api/provider/invalid-name.md
- docs/prob/api/record-not-found.md
- docs/prob/api/unexpected-body-read-error.md
- docs/prob/api/unexpected-db-read-error.md
- docs/prob/api/unexpected-db-write-error.md
- docs/prob/api/unexpected-multipart-read-error.md
- docs/prob/build/run/invalid-input.md
See api!45 and api!46 MRs for more info (sorry, not open sourced yet)
|
1.0
|
Problem pages - We're introducing problem types, according to [IETF RFC-7087](https://www.rfc-archive.org/getrfc?rfc=7807#gsc.tab=0), and need problem describing pages for the following:
- docs/prob/api/internal-server-error.md
- docs/prob/api/invalid-param.md
- docs/prob/api/invalid-param-int.md
- docs/prob/api/invalid-param-uint.md
- docs/prob/api/missing-param-string.md
- docs/prob/api/project/cannot-change-group.md
- docs/prob/api/project/run/params-deserialize.md
- docs/prob/api/project/run/params-serialize.md
- docs/prob/api/project/run/trigger.md
- docs/prob/api/provider/invalid-name.md
- docs/prob/api/record-not-found.md
- docs/prob/api/unexpected-body-read-error.md
- docs/prob/api/unexpected-db-read-error.md
- docs/prob/api/unexpected-db-write-error.md
- docs/prob/api/unexpected-multipart-read-error.md
- docs/prob/build/run/invalid-input.md
See api!45 and api!46 MRs for more info (sorry, not open sourced yet)
|
non_test
|
problem pages we re introducing problem types according to and need problem describing pages for the following docs prob api internal server error md docs prob api invalid param md docs prob api invalid param int md docs prob api invalid param uint md docs prob api missing param string md docs prob api project cannot change group md docs prob api project run params deserialize md docs prob api project run params serialize md docs prob api project run trigger md docs prob api provider invalid name md docs prob api record not found md docs prob api unexpected body read error md docs prob api unexpected db read error md docs prob api unexpected db write error md docs prob api unexpected multipart read error md docs prob build run invalid input md see api and api mrs for more info sorry not open sourced yet
| 0
|
20,306
| 2,622,644,640
|
IssuesEvent
|
2015-03-04 05:31:13
|
Kimi-Arthur/pimix-software-suite
|
https://api.github.com/repos/Kimi-Arthur/pimix-software-suite
|
opened
|
[Capricorn] Multithread stop inside worker
|
auto-migrated Priority-High Type-Enhancement
|
```
Using method to set flag may be considered.
```
Original issue reported on code.google.com by `kimi.rib...@gmail.com` on 21 Aug 2013 at 5:42
* Blocking: #10
|
1.0
|
[Capricorn] Multithread stop inside worker - ```
Using method to set flag may be considered.
```
Original issue reported on code.google.com by `kimi.rib...@gmail.com` on 21 Aug 2013 at 5:42
* Blocking: #10
|
non_test
|
multithread stop inside worker using method to set flag may be considered original issue reported on code google com by kimi rib gmail com on aug at blocking
| 0
|
490,388
| 14,118,861,609
|
IssuesEvent
|
2020-11-08 15:23:02
|
formium/formik
|
https://api.github.com/repos/formium/formik
|
closed
|
setFieldValue creates infinite loop
|
Priority: High Type: Bug
|
## 🐛 Bug report
### Current Behavior
I have an input component that has some internal state (i.e. the inputs are made on another scale - e.g. values are written in millions instead of units. But the state of interest are always just the units.). This component only takes an initial value and not the current value as props. It also exposes a prop called `onChangeValue` which is basically a callback with the current value as input.
### Expected behavior
The following should update `formik.values.value` but instead I get an infinite loop.
```js
onChangeValue={value => formik.setFieldValue("value", value)}
```
### Reproducible example
```js
import { useFormik } from "formik";
import * as React from "react";
function CustomInput({ initialValue, scale, onChangeValue, name }) {
const [value, setValue] = React.useState(initialValue / scale);
React.useEffect(() => {
onChangeValue(value * scale);
}, [value, scale, onChangeValue]);
return (
<input value={value} onChange={(event) => setValue(event.target.value)} name={name} />
);
}
export default function Demo() {
const initialValue = 100;
const formik = useFormik({
initialValues: {
value: initialValue
},
onSubmit: (values) => {
console.log(JSON.stringify(values, null, 2));
}
});
return (
<form onSubmit={formik.handleSubmit}>
<CustomInput
initialValue={initialValue}
scale={10}
name="value"
onChangeValue={value => formik.setFieldValue("value", value)}
/>
</form>
);
}
```
### Solution without formik
The following solution works without using formik
```js
import * as React from "react";
function CustomInput({ initialValue, scale, onChangeValue, name }) {
const [value, setValue] = React.useState(initialValue / scale);
React.useEffect(() => {
onChangeValue(value * scale);
}, [value, scale, onChangeValue]);
return (
<input
value={value}
onChange={(event) => setValue(event.target.value)}
name={name}
/>
);
}
export default function NoFormikDemo() {
const initialValue = 100;
const [value, setValue] = React.useState(initialValue);
function handleSubmit(event) {
event.preventDefault();
console.log(value);
}
return (
<form onSubmit={handleSubmit}>
<CustomInput
initialValue={initialValue}
scale={10}
onChangeValue={setValue}
/>
</form>
);
}
```
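A note on the mechanism (an assumption on my part, not stated in the issue thread): the loop arises because the inline arrow `value => formik.setFieldValue(...)` is a new function on every render, so it changes the effect's dependency array, re-runs the effect, calls `setFieldValue`, and triggers another render. The usual fix is to give `onChangeValue` a stable identity, e.g. with React's `useCallback`. The `makeUseCallback` helper below is a hypothetical, framework-free stand-in that only illustrates the identity-caching behavior; it is not React's implementation.

```javascript
// Stand-in for useCallback: return the cached function while the
// dependency array is shallow-equal to the previous one.
function makeUseCallback() {
  let cached;
  let cachedDeps;
  return function useCallback(fn, deps) {
    const same =
      cachedDeps &&
      deps.length === cachedDeps.length &&
      deps.every((d, i) => d === cachedDeps[i]);
    if (!same) {
      cached = fn;
      cachedDeps = deps;
    }
    return cached;
  };
}

const useCallback = makeUseCallback();

// Two "renders" each create a fresh arrow function, but the hook hands
// back the same reference while deps are unchanged — so an effect that
// depends on it does not re-run, and the loop is broken.
const first = useCallback((v) => v, []);
const second = useCallback((v) => v, []);
console.log(first === second); // true
```

In the component itself, the equivalent would be `onChangeValue={useCallback(value => formik.setFieldValue("value", value), [])}` (or dropping `onChangeValue` from the effect's dependency array), which is the pattern the React hooks documentation recommends for callbacks passed into effects.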
### Your environment
<!-- PLEASE FILL THIS OUT -->
| Software | Version(s) |
| ---------------- | ---------- |
| Formik | 2.2.1
| React | 16.14.0
| TypeScript | 4.0.3
| Browser | chrome
| npm/Yarn | npm
| Operating System | macOS
|
1.0
|
setFieldValue creates infinite loop - ## 🐛 Bug report
### Current Behavior
I have an input component that has some internal state (i.e. the inputs are made on another scale - e.g. values are written in millions instead of units. But the state of interest are always just the units.). This component only takes an initial value and not the current value as props. It also exposes a prop called `onChangeValue` which is basically a callback with the current value as input.
### Expected behavior
The following should update `formik.values.value` but instead I get an infinite loop.
```js
onChangeValue={value => formik.setFieldValue("value", value)}
```
### Reproducible example
```js
import { useFormik } from "formik";
import * as React from "react";
function CustomInput({ initialValue, scale, onChangeValue, name }) {
const [value, setValue] = React.useState(initialValue / scale);
React.useEffect(() => {
onChangeValue(value * scale);
}, [value, scale, onChangeValue]);
return (
<input value={value} onChange={(event) => setValue(event.target.value)} name={name} />
);
}
export default function Demo() {
const initialValue = 100;
const formik = useFormik({
initialValues: {
value: initialValue
},
onSubmit: (values) => {
console.log(JSON.stringify(values, null, 2));
}
});
return (
<form onSubmit={formik.handleSubmit}>
<CustomInput
initialValue={initialValue}
scale={10}
name="value"
onChangeValue={value => formik.setFieldValue("value", value)}
/>
</form>
);
}
```
### Solution without formik
The following solution works without using formik
```js
import * as React from "react";
function CustomInput({ initialValue, scale, onChangeValue, name }) {
const [value, setValue] = React.useState(initialValue / scale);
React.useEffect(() => {
onChangeValue(value * scale);
}, [value, scale, onChangeValue]);
return (
<input
value={value}
onChange={(event) => setValue(event.target.value)}
name={name}
/>
);
}
export default function NoFormikDemo() {
const initialValue = 100;
const [value, setValue] = React.useState(initialValue);
function handleSubmit(event) {
event.preventDefault();
console.log(value);
}
return (
<form onSubmit={handleSubmit}>
<CustomInput
initialValue={initialValue}
scale={10}
onChangeValue={setValue}
/>
</form>
);
}
```
### Your environment
<!-- PLEASE FILL THIS OUT -->
| Software | Version(s) |
| ---------------- | ---------- |
| Formik | 2.2.1
| React | 16.14.0
| TypeScript | 4.0.3
| Browser | chrome
| npm/Yarn | npm
| Operating System | macOS
|
non_test
|
setfieldvalue creates infinite loop 🐛 bug report current behavior i have an input component that has some internal state i e the inputs are made on another scale e g values are written in millions instead of units but the state of interest are always just the units this component only takes an initial value and not the current value as props it also exposes a prop called onchangevalue which is basically a callback with the current value as input expected behavior the following should update formik values value but instead i get an infinite loop js onchangevalue value formik setfieldvalue value value reproducible example js import useformik from formik import as react from react function custominput initialvalue scale onchangevalue name const react usestate initialvalue scale react useeffect onchangevalue value scale return setvalue event target value name name export default function demo const initialvalue const formik useformik initialvalues value initialvalue onsubmit values console log json stringify values null return custominput initialvalue initialvalue scale name value onchangevalue value formik setfieldvalue value value solution without formik the following solution works without using formik js import as react from react function custominput initialvalue scale onchangevalue name const react usestate initialvalue scale react useeffect onchangevalue value scale return input value value onchange event setvalue event target value name name export default function noformikdemo const initialvalue const react usestate initialvalue function handlesubmit event event preventdefault console log value return custominput initialvalue initialvalue scale onchangevalue setvalue your environment software version s formik react typescript browser chrome npm yarn npm operating system macos
| 0
|
76,270
| 26,339,523,685
|
IssuesEvent
|
2023-01-10 16:37:55
|
apache/jmeter
|
https://api.github.com/repos/apache/jmeter
|
opened
|
OpenModelThreadGroupController cannot be cast to LoopController
|
defect to-triage
|
### Expected behavior
Using new Open Model Thread Group throws exception when there are lot of threads to hit the target RPS value.
### Actual behavior
```
2023-01-09 22:38:03,363 ERROR o.a.j.t.JMeterThread: Test failed!
java.lang.ClassCastException: org.apache.jmeter.threads.openmodel.OpenModelThreadGroupController cannot be cast to org.apache.jmeter.control.LoopController
at org.apache.jmeter.threads.AbstractThreadGroup.startNextLoop(AbstractThreadGroup.java:171) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.continueOnThreadLoop(JMeterThread.java:434) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.triggerLoopLogicalActionOnParentControllers(JMeterThread.java:372) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:282) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.openmodel.OpenModelThreadGroup$ThreadsStarter.run$lambda-0(OpenModelThreadGroup.kt:128) ~[ApacheJMeter_core.jar:5.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:1.8.0_351]
at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_351]
2023-01-09 22:38:03,363 INFO o.a.j.t.JMeterThread: Thread finished: Open Model Thread Group 1-248678
```
### Steps to reproduce the problem
Note: Repro is on our internal service that cannot be shared here. Hopefully this is enough info.
* Create new test plan with Open Model Thread Group.
* Set target RPS to a value that will create heavy load
* As the threads increase over time it works fine until it hits a peak concurrent load that starts to throw the above exception.
### JMeter Version
5.5
### Java Version
java version "1.8.0_351" Java(TM) SE Runtime Environment (build 1.8.0_351-b10) Java HotSpot(TM) 64-Bit Server VM (build 25.351-b10, mixed mode)
### OS Version
Windows 11 22621.963
|
1.0
|
OpenModelThreadGroupController cannot be cast to LoopController - ### Expected behavior
Using new Open Model Thread Group throws exception when there are lot of threads to hit the target RPS value.
### Actual behavior
```
2023-01-09 22:38:03,363 ERROR o.a.j.t.JMeterThread: Test failed!
java.lang.ClassCastException: org.apache.jmeter.threads.openmodel.OpenModelThreadGroupController cannot be cast to org.apache.jmeter.control.LoopController
at org.apache.jmeter.threads.AbstractThreadGroup.startNextLoop(AbstractThreadGroup.java:171) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.continueOnThreadLoop(JMeterThread.java:434) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.triggerLoopLogicalActionOnParentControllers(JMeterThread.java:372) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:282) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.threads.openmodel.OpenModelThreadGroup$ThreadsStarter.run$lambda-0(OpenModelThreadGroup.kt:128) ~[ApacheJMeter_core.jar:5.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:1.8.0_351]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:1.8.0_351]
at java.lang.Thread.run(Unknown Source) ~[?:1.8.0_351]
2023-01-09 22:38:03,363 INFO o.a.j.t.JMeterThread: Thread finished: Open Model Thread Group 1-248678
```
### Steps to reproduce the problem
Note: Repro is on our internal service that cannot be shared here. Hopefully this is enough info.
* Create new test plan with Open Model Thread Group.
* Set target RPS to a value that will create heavy load
* As the threads increase over time it works fine until it hits a peak concurrent load that starts to throw the above exception.
### JMeter Version
5.5
### Java Version
java version "1.8.0_351" Java(TM) SE Runtime Environment (build 1.8.0_351-b10) Java HotSpot(TM) 64-Bit Server VM (build 25.351-b10, mixed mode)
### OS Version
Windows 11 22621.963
|
non_test
|
openmodelthreadgroupcontroller cannot be cast to loopcontroller expected behavior using new open model thread group throws exception when there are lot of threads to hit the target rps value actual behavior error o a j t jmeterthread test failed java lang classcastexception org apache jmeter threads openmodel openmodelthreadgroupcontroller cannot be cast to org apache jmeter control loopcontroller at org apache jmeter threads abstractthreadgroup startnextloop abstractthreadgroup java at org apache jmeter threads jmeterthread continueonthreadloop jmeterthread java at org apache jmeter threads jmeterthread triggerlooplogicalactiononparentcontrollers jmeterthread java at org apache jmeter threads jmeterthread run jmeterthread java at org apache jmeter threads openmodel openmodelthreadgroup threadsstarter run lambda openmodelthreadgroup kt at java util concurrent executors runnableadapter call unknown source at java util concurrent futuretask run unknown source at java util concurrent threadpoolexecutor runworker unknown source at java util concurrent threadpoolexecutor worker run unknown source at java lang thread run unknown source info o a j t jmeterthread thread finished open model thread group steps to reproduce the problem note repro is on our internal service that cannot be shared here hopefully this is enough info create new test plan with open model thread group set target rps to a value that will create heavy load as the threads increase over time it works fine until it hits a peak concurrent load that starts to throw the above exception jmeter version java version java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode os version windows
| 0
|
452,870
| 13,060,557,451
|
IssuesEvent
|
2020-07-30 12:37:48
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
[Feature] Find My Shop
|
Feature Request Priority:Major Should Implement By Addon
|
**Describe the Feature**
I have changed the limited amount of shops create.
A player is unable to create a new shop and he can't remember where the shop was created.
I would like to propose is search own shop in a database with the command `/qs list` or something.
- Show a list of the shop,
- Show detail of the shop when Hover that name.
- Show coordinates, name world, price, selling or buying
- Show the amount of shop/limit. (if operator limit it)
Like this screenshot
**Screenshots**

**Additional context**
Admin, Operator or Who has permission may find another player shop.
|
1.0
|
[Feature] Find My Shop - **Describe the Feature**
I have changed the limited amount of shops create.
A player is unable to create a new shop and he can't remember where the shop was created.
I would like to propose is search own shop in a database with the command `/qs list` or something.
- Show a list of the shop,
- Show detail of the shop when Hover that name.
- Show coordinates, name world, price, selling or buying
- Show the amount of shop/limit. (if operator limit it)
Like this screenshot
**Screenshots**

**Additional context**
Admin, Operator or Who has permission may find another player shop.
|
non_test
|
find my shop describe the feature i have changed the limited amount of shops create a player is unable to create a new shop and he can t remember where the shop was created i would like to propose is search own shop in a database with the command qs list or something show a list of the shop show detail of the shop when hover that name show coordinates name world price selling or buying show the amount of shop limit if operator limit it like this screenshot screenshots additional context admin operator or who has permission may find another player shop
| 0
|
302,526
| 9,261,016,488
|
IssuesEvent
|
2019-03-18 08:01:21
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.youtube.com - video or audio doesn't play
|
browser-firefox priority-critical
|
<!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: web -->
**URL**: https://www.youtube.com/watch?v=Hb-srHSD_SE
**Browser / Version**: Firefox 65.0
**Operating System**: Windows 8.1
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: The site opens automatically a few moments later when I close its tab. Its is just like a pop up in its opening and a separate tab is opened.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.youtube.com - video or audio doesn't play - <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: web -->
**URL**: https://www.youtube.com/watch?v=Hb-srHSD_SE
**Browser / Version**: Firefox 65.0
**Operating System**: Windows 8.1
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: The site opens automatically a few moments later when I close its tab. It is just like a pop-up: it opens in a separate tab.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
video or audio doesn t play url browser version firefox operating system windows tested another browser no problem type video or audio doesn t play description the site opens automatically a few moments later when i close its tab its is just like a pop up in its opening and a separate tab is opened steps to reproduce browser configuration none from with ❤️
| 0
|
990
| 3,022,953,485
|
IssuesEvent
|
2015-08-01 00:39:23
|
dart-lang/dartdoc
|
https://api.github.com/repos/dart-lang/dartdoc
|
closed
|
When generating doc tests, run a link checker to ensure no dead links
|
Infrastructure
|
Seems like a good way to verify we generated a good set of docs.
|
1.0
|
When generating doc tests, run a link checker to ensure no dead links - Seems like a good way to verify we generated a good set of docs.
|
non_test
|
when generating doc tests run a link checker to ensure no dead links seems like a good way to verify we generated a good set of docs
| 0
|
346,279
| 30,881,837,590
|
IssuesEvent
|
2023-08-03 18:14:06
|
ray-project/ray
|
https://api.github.com/repos/ray-project/ray
|
opened
|
Release test rllib_learning_tests_impala_torch.aws failed
|
bug P0 rllib release-test
|
Release test **rllib_learning_tests_impala_torch.aws** failed. See https://buildkite.com/ray-project/release-tests-branch/builds/2019#0189bc92-c0c9-4367-b759-7ec03048a6d3 for more details.
Managed by OSS Test Policy
|
1.0
|
Release test rllib_learning_tests_impala_torch.aws failed - Release test **rllib_learning_tests_impala_torch.aws** failed. See https://buildkite.com/ray-project/release-tests-branch/builds/2019#0189bc92-c0c9-4367-b759-7ec03048a6d3 for more details.
Managed by OSS Test Policy
|
test
|
release test rllib learning tests impala torch aws failed release test rllib learning tests impala torch aws failed see for more details managed by oss test policy
| 1
|
267,059
| 23,277,451,049
|
IssuesEvent
|
2022-08-05 08:39:39
|
gravitee-io/issues
|
https://api.github.com/repos/gravitee-io/issues
|
closed
|
[Jupiter benchmark] Optimize cache
|
project: APIM status: in test archpim
|
With Jupiter and the new execution mode, each policy must complete before the next one is executed.
Policy execution now includes the eventual transformation made on the request or response body. To make sure the different transformations are not re-executed during the execution of the reactive chain, we are relying on Maybe.cache() and Flowable.cache(). The caching is then operated by the reactive framework itself and allows us to keep the code relatively simple.
To optimize caching, we’ve already spotted the different places where caching needs to be applied, and it works as expected. However, depending on the number of policies and body transformations that may occur during the request execution, this caching mechanism can be memory intensive because each transformation that has been cached is kept until the end of the request execution (not eligible for GC).
Ex:
xml2json → json2xml → assign-content → Backend → xml2json → json2xml → assign-content
keeps 3 versions of the request body and 3 versions of the response body in memory.
We should find a way to optimize the caching by only keeping the latest useful version of the request or response body and releasing the other ones as soon as possible (at least making them eligible for GC).
|
1.0
|
[Jupiter benchmark] Optimize cache - With Jupiter and the new execution mode, each policy must complete before the next one is executed.
Policy execution now includes the eventual transformation made on the request or response body. To make sure the different transformations are not re-executed during the execution of the reactive chain, we are relying on Maybe.cache() and Flowable.cache(). The caching is then operated by the reactive framework itself and allows us to keep the code relatively simple.
To optimize caching, we’ve already spotted the different places where caching needs to be applied, and it works as expected. However, depending on the number of policies and body transformations that may occur during the request execution, this caching mechanism can be memory intensive because each transformation that has been cached is kept until the end of the request execution (not eligible for GC).
Ex:
xml2json → json2xml → assign-content → Backend → xml2json → json2xml → assign-content
keeps 3 versions of the request body and 3 versions of the response body in memory.
We should find a way to optimize the caching by only keeping the latest useful version of the request or response body and releasing the other ones as soon as possible (at least making them eligible for GC).
|
test
|
optimize cache with jupiter and the new execution mode each policy must complete before the next one is executed policy execution now includes the eventual transformation made on the request or response body to make sure the different transformation are not re executed during the execution of the reactive chain we are relying on maybe cache and flowable cache the caching is then operated by the reactive framework itself and allows to keep the code relatively simple to optimize caching we’ve already spotted the different places where it requires to applying caching and it works as expected however depending on the number of policies and body transformations that may occur during the request execution this caching mechanism can be memory intensive because each transformation that has been cached is kept until the end of the request execution not eligible to gc ex → → assign content → backend → → → assign content keeps version of the request body and versions of the response body in memory we should find a way to optimize the caching by only keeping the latest useful version of the request or response body and release the other ones as soon as possible at least make them eligible to the gc
| 1
|
423,425
| 12,296,021,825
|
IssuesEvent
|
2020-05-11 05:56:11
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mail.google.com - site is not usable
|
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
|
<!-- @browser: Firefox 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52667 -->
**URL**: https://mail.google.com/mail/u/0/
**Browser / Version**: Firefox 77.0
**Operating System**: Windows 8
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Refusing to go to gmail sign in
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/1b2b3e40-c96b-4018-a7e3-4a4249f332da.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200507233245</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/c092994c-20c6-4f48-8765-9a5b6062cbf2)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
mail.google.com - site is not usable - <!-- @browser: Firefox 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52667 -->
**URL**: https://mail.google.com/mail/u/0/
**Browser / Version**: Firefox 77.0
**Operating System**: Windows 8
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Refusing to go to gmail sign in
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/1b2b3e40-c96b-4018-a7e3-4a4249f332da.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200507233245</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/c092994c-20c6-4f48-8765-9a5b6062cbf2)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
mail google com site is not usable url browser version firefox operating system windows tested another browser no problem type site is not usable description page not loading correctly steps to reproduce refusing to go to gmail sign in view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
141,450
| 5,436,216,042
|
IssuesEvent
|
2017-03-05 23:14:02
|
IserveU/IserveU
|
https://api.github.com/repos/IserveU/IserveU
|
closed
|
Terms and Conditions button and accept box enters :hover state inconsistently
|
bug low priority
|
Unhovered, unclicked, the button appears to be active:

Clicking on it lightens it, but then the "I Agree" box darkens.

Also, this is a separate issue, but 3 tickets for the T&C box seems like overkill. Is there any way to get rid of the scrollbar on the right when you open the modal?

|
1.0
|
Terms and Conditions button and accept box enters :hover state inconsistently - Unhovered, unclicked, the button appears to be active:

Clicking on it lightens it, but then the "I Agree" box darkens.

Also, this is a separate issue, but 3 tickets for the T&C box seems like overkill. Is there any way to get rid of the scrollbar on the right when you open the modal?

|
non_test
|
terms and conditions button and accept box enters hover state inconsistently unhovered unclicked the button appears to be active clicking on it lightens it but then the i agree box darkens also this is a separate issue but tickets for the t c box seems like overkill is there any way to get rid of the scrollbar on the right when you open the modal
| 0
|
292,871
| 8,969,606,935
|
IssuesEvent
|
2019-01-29 11:16:53
|
ConsenSys/mythril-classic
|
https://api.github.com/repos/ConsenSys/mythril-classic
|
closed
|
Latest mythril-dev docker image fails on analysis
|
Priority: Medium Review bug
|
## Description
Latest `mythril/myth-dev` docker image [fails on analysis](https://circleci.com/workflow-run/5a531854-ae08-481b-9595-277d33dec6ef) with
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/mythril-0.19.11-py3.6.egg/mythril/laser/ethereum/svm.py", line 283, in execute_state
```
## How to Reproduce
The failure occurs on the latest `mythril-dev` docker image so
```
docker pull mythril/myth-dev
docker run -it -v "$PWD"/build:/build mythril/myth-dev myth --truffle
```
The sample CI builds used here run against the [ColonyNetwork](https://github.com/JoinColony/colonyNetwork) repo.
## Expected behavior
`mythril-dev` image was analysing our project correctly 4 days ago as seen in [this build](https://circleci.com/gh/JoinColony/colonyNetwork/5945)
## Environment
- Mythril version: v0.19.11
- Solidity compiler and version: 0.4.23
- Python version: 2.7.15
- OS and Version: Mac OS High Sierra
|
1.0
|
Latest mythril-dev docker image fails on analysis - ## Description
Latest `mythril/myth-dev` docker image [fails on analysis](https://circleci.com/workflow-run/5a531854-ae08-481b-9595-277d33dec6ef) with
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/mythril-0.19.11-py3.6.egg/mythril/laser/ethereum/svm.py", line 283, in execute_state
```
## How to Reproduce
The failure occurs on the latest `mythril-dev` docker image so
```
docker pull mythril/myth-dev
docker run -it -v "$PWD"/build:/build mythril/myth-dev myth --truffle
```
The sample CI builds used here run against the [ColonyNetwork](https://github.com/JoinColony/colonyNetwork) repo.
## Expected behavior
`mythril-dev` image was analysing our project correctly 4 days ago as seen in [this build](https://circleci.com/gh/JoinColony/colonyNetwork/5945)
## Environment
- Mythril version: v0.19.11
- Solidity compiler and version: 0.4.23
- Python version: 2.7.15
- OS and Version: Mac OS High Sierra
|
non_test
|
latest mythril dev docker image fails on analysis description latest mythril myth dev docker image with traceback most recent call last file usr local lib dist packages mythril egg mythril laser ethereum svm py line in execute state how to reproduce the failure occurs on the latest mythril dev docker image so docker pull mythril myth dev docker run it v pwd build build mythril myth dev myth truffle the sample ci builds used here run against the repo expected behavior mythril dev image was analysing our project correctly days ago as seen in environment mythril version solidity compiler and version python version os and version mac os high sierra
| 0
|
22,397
| 4,794,198,514
|
IssuesEvent
|
2016-10-31 20:19:56
|
canjs/canjs
|
https://api.github.com/repos/canjs/canjs
|
closed
|
Docs show can.Map and can.List
|
Documentation
|
Aren't can.Map and can.List gone now?
http://canjs.github.io/canjs/doc/can-connect/can/map/map.html
^ That behavior links to 2.3 can.Map docs and shows can.Map in its examples.

^ super-map shows can.Map and can.List in its examples also
|
1.0
|
Docs show can.Map and can.List - Isn't there no more can.Map or can.List?
http://canjs.github.io/canjs/doc/can-connect/can/map/map.html
^ That behavior links to 2.3 can.Map docs and shows can.Map in its examples.

^ super-map shows can.Map and can.List in its examples also
|
non_test
|
docs show can map and can list isn t there no more can map or can list that behavior links to can map docs and shows can map in its examples super map shows can map and can list in its examples also
| 0
|
87,364
| 8,072,784,937
|
IssuesEvent
|
2018-08-06 17:04:11
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Failing Tests: [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot]
|
kind/bug kind/failing-test milestone/removed priority/critical-urgent sig/cluster-lifecycle sig/gcp
|
# Failing Jobs
[sig-release-master-blocking#gci-gke-reboot](https://k8s-testgrid.appspot.com/sig-release-master-blocking#gci-gke-reboot)
# Failing Tests
[each node by ordering clean reboot and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by switching off the network interface and ensure they function upon switch on](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by dropping all inbound packets for a while and ensure they function afterwards](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by dropping all outbound packets for a while and ensure they function afterwards](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by ordering unclean reboot and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by triggering kernel panic and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
Tests failing on sig-master-blocking
/cc @tpepper @AishSundar
/kind bug
/priority failing-test
/priority important-soon
/sig cluster-lifecycle
/milestone v1.12
|
1.0
|
Failing Tests: [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] - # Failing Jobs
[sig-release-master-blocking#gci-gke-reboot](https://k8s-testgrid.appspot.com/sig-release-master-blocking#gci-gke-reboot)
# Failing Tests
[each node by ordering clean reboot and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by switching off the network interface and ensure they function upon switch on](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by dropping all inbound packets for a while and ensure they function afterwards](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by dropping all outbound packets for a while and ensure they function afterwards](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by ordering unclean reboot and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
[each node by triggering kernel panic and ensure they function upon restart](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/16044)
Tests failing on sig-master-blocking
/cc @tpepper @AishSundar
/kind bug
/priority failing-test
/priority important-soon
/sig cluster-lifecycle
/milestone v1.12
|
test
|
failing tests reboot failing jobs failing tests tests failing on sig master blocking cc tpepper aishsundar kind bug priority failing test priority important soon sig cluster lifecycle milestone
| 1
|
11,454
| 30,550,681,521
|
IssuesEvent
|
2023-07-20 08:18:39
|
osrd-project/osrd
|
https://api.github.com/repos/osrd-project/osrd
|
closed
|
Isolate infra generator
|
area:core kind:architecture
|
## Short description meaningful for devs and users
- externalize core/examples/generated/lib into a new python library outside of code
- use mono repo library requirement handling
- for now we keep the [scripts](core/examples/generated/scripts) inside core
## Motivation
This will avoid having weird imports such as this: https://github.com/DGEXSolutions/osrd/blob/8bba27c247b826a3c74ae49247a6f4fad6c945dc/tests/run_integration_tests.py#L34
## Dependencies
- https://github.com/DGEXSolutions/osrd/issues/1150
|
1.0
|
Isolate infra generator - ## Short description meaningful for devs and users
- externalize core/examples/generated/lib into a new python library outside of code
- use mono repo library requirement handling
- for now we keep the [scripts](core/examples/generated/scripts) inside core
## Motivation
This will avoid having weird imports such as this: https://github.com/DGEXSolutions/osrd/blob/8bba27c247b826a3c74ae49247a6f4fad6c945dc/tests/run_integration_tests.py#L34
## Dependencies
- https://github.com/DGEXSolutions/osrd/issues/1150
|
non_test
|
isolate infra generator short description meaningful for devs and users externalize core examples generated lib into a new python library outside of code use mono repo library requirement handling for now we keep the core examples generated scripts inside core motivation this will avoid having weird import such as this dependencies
| 0
|
684,743
| 23,428,800,756
|
IssuesEvent
|
2022-08-14 20:05:31
|
red-hat-storage/ocs-ci
|
https://api.github.com/repos/red-hat-storage/ocs-ci
|
closed
|
flexy baremetal psi , post installation check fails because of mismatch in the expected output from osd tree
|
bug High Priority lifecycle/stale
|
Vimal Patel,
1 min
,
New
"""
if cls is None:
cls = validator_for(schema)
cls.check_schema(schema)
validator = cls(schema, *args, **kwargs)
error = exceptions.best_match(validator.iter_errors(instance))
if error is not None:
> raise error
E jsonschema.exceptions.ValidationError: 'ocp45vjp-h4bqr-worker-8lddv' is not one of ['ocs-deviceset-0-data-0-g5m9v', 'ocs-deviceset-1-data-0-xlsvx', 'ocs-deviceset-2-data-0-mc2g4']
E
E Failed validating 'enum' in schema['properties']['name']:
E {'enum': ['ocs-deviceset-0-data-0-g5m9v',
E 'ocs-deviceset-1-data-0-xlsvx',
E 'ocs-deviceset-2-data-0-mc2g4']}
E
E On instance['name']:
E 'ocp45vjp-h4bqr-worker-8lddv'
../myvenv/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
|
1.0
|
flexy baremetal psi , post installation check fails because of mismatch in the expected output from osd tree - Vimal Patel,
1 min
,
New
"""
if cls is None:
cls = validator_for(schema)
cls.check_schema(schema)
validator = cls(schema, *args, **kwargs)
error = exceptions.best_match(validator.iter_errors(instance))
if error is not None:
> raise error
E jsonschema.exceptions.ValidationError: 'ocp45vjp-h4bqr-worker-8lddv' is not one of ['ocs-deviceset-0-data-0-g5m9v', 'ocs-deviceset-1-data-0-xlsvx', 'ocs-deviceset-2-data-0-mc2g4']
E
E Failed validating 'enum' in schema['properties']['name']:
E {'enum': ['ocs-deviceset-0-data-0-g5m9v',
E 'ocs-deviceset-1-data-0-xlsvx',
E 'ocs-deviceset-2-data-0-mc2g4']}
E
E On instance['name']:
E 'ocp45vjp-h4bqr-worker-8lddv'
../myvenv/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
|
non_test
|
flexy baremetal psi post installation check fails because of mismatch in the expected output from osd tree vimal patel min new if cls is none cls validator for schema cls check schema schema validator cls schema args kwargs error exceptions best match validator iter errors instance if error is not none raise error e jsonschema exceptions validationerror worker is not one of e e failed validating enum in schema e enum ocs deviceset data e ocs deviceset data xlsvx e ocs deviceset data e e on instance e worker myvenv lib site packages jsonschema validators py validationerror
| 0
|
41,068
| 12,813,026,800
|
IssuesEvent
|
2020-07-04 10:20:34
|
radicle-dev/radicle-upstream
|
https://api.github.com/repos/radicle-dev/radicle-upstream
|
closed
|
What to do with external links
|
bug security
|
Now that there is basic markdown support in the app, it is possible to "inject" external links into the app.

We should figure out what to do in this case as there are security implications: https://www.electronjs.org/docs/tutorial/security#isolation-for-untrusted-content.
\cc @cloudhead, @juliendonck
|
True
|
What to do with external links - Now that there is basic markdown support in the app, it is possible to "inject" external links into the app.

We should figure out what to do in this case as there are security implications: https://www.electronjs.org/docs/tutorial/security#isolation-for-untrusted-content.
\cc @cloudhead, @juliendonck
|
non_test
|
what to do with external links now that there is basic markdown support in the app it is possible to inject external links into the app we should figure out what to do in this case as there are security implications cc cloudhead juliendonck
| 0
|
505,201
| 14,629,643,309
|
IssuesEvent
|
2020-12-23 16:12:26
|
Instant-Visio/InstantVisio-WebApp
|
https://api.github.com/repos/Instant-Visio/InstantVisio-WebApp
|
opened
|
Redirect if navigating to wrong routes
|
high priority webApp
|
The following should not be accessible; redirect to the home page or the admin dashboard:
- `/premium-video`
- `/premium-video/room` (missing room Id)
- `/premium-video/room/${roomId}` (missing password)
|
1.0
|
Redirect if navigating to wrong routes - The following should not be accessible, redirect to home page or admin dashboard
- `/premium-video`
- `/premium-video/room` (missing room Id)
- `/premium-video/room/${roomId}` (missing password)
|
non_test
|
redirect if navigating to wrong routes the following should not be accessible redirect to home page or admin dashboard premium video premium video room missing room id premium video room roomid missing password
| 0
|
14,955
| 9,437,098,710
|
IssuesEvent
|
2019-04-13 12:39:36
|
josh-tf/cbvpos
|
https://api.github.com/repos/josh-tf/cbvpos
|
closed
|
WS-2015-0015 Medium Severity Vulnerability detected by WhiteSource
|
security vulnerability
|
## WS-2015-0015 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.6.2.tgz</b></p></summary>
<p>Tiny ms conversion utility</p>
<p>Library home page: <a href="http://registry.npmjs.org/ms/-/ms-0.6.2.tgz">http://registry.npmjs.org/ms/-/ms-0.6.2.tgz</a></p>
<p>Path to dependency file: /cbvpos/app/package.json</p>
<p>Path to vulnerable library: /tmp/git/cbvpos/app/node_modules/ms/package.json</p>
<p>
Dependency Hierarchy:
- grunt-mocha-webdriver-1.2.2.tgz (Root Library)
- mocha-1.21.5.tgz
- debug-2.0.0.tgz
- :x: **ms-0.6.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Ms is vulnerable to regular expression denial of service (ReDoS) when extremely long version strings are parsed.
<p>Publish Date: 2015-10-24
<p>URL: <a href=https://nodesecurity.io/advisories/46>WS-2015-0015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/46">https://nodesecurity.io/advisories/46</a></p>
<p>Release Date: 2015-10-24</p>
<p>Fix Resolution: Update to version 0.7.1 or greater. An alternative would be to limit the input length of the user input before passing it into ms.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isOpenPROnNewVersion":false,"isPackageBased":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ms","packageVersion":"0.6.2","isTransitiveDependency":true,"dependencyTree":"grunt-mocha-webdriver:1.2.2;mocha:1.21.5;debug:2.0.0;ms:0.6.2","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"WS-2015-0015","vulnerabilityDetails":"Ms is vulnerable to regular expression denial of service (ReDoS) when extremely long version strings are parsed.","cvss2Severity":"medium","cvss2Score":"5.3","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2015-0015 Medium Severity Vulnerability detected by WhiteSource - ## WS-2015-0015 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.6.2.tgz</b></p></summary>
<p>Tiny ms conversion utility</p>
<p>Library home page: <a href="http://registry.npmjs.org/ms/-/ms-0.6.2.tgz">http://registry.npmjs.org/ms/-/ms-0.6.2.tgz</a></p>
<p>Path to dependency file: /cbvpos/app/package.json</p>
<p>Path to vulnerable library: /tmp/git/cbvpos/app/node_modules/ms/package.json</p>
<p>
Dependency Hierarchy:
- grunt-mocha-webdriver-1.2.2.tgz (Root Library)
- mocha-1.21.5.tgz
- debug-2.0.0.tgz
- :x: **ms-0.6.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Ms is vulnerable to regular expression denial of service (ReDoS) when extremely long version strings are parsed.
<p>Publish Date: 2015-10-24
<p>URL: <a href=https://nodesecurity.io/advisories/46>WS-2015-0015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/46">https://nodesecurity.io/advisories/46</a></p>
<p>Release Date: 2015-10-24</p>
<p>Fix Resolution: Update to version 0.7.1 or greater. An alternative would be to limit the input length of the user input before passing it into ms.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isOpenPROnNewVersion":false,"isPackageBased":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ms","packageVersion":"0.6.2","isTransitiveDependency":true,"dependencyTree":"grunt-mocha-webdriver:1.2.2;mocha:1.21.5;debug:2.0.0;ms:0.6.2","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"WS-2015-0015","vulnerabilityDetails":"Ms is vulnerable to regular expression denial of service (ReDoS) when extremely long version strings are parsed.","cvss2Severity":"medium","cvss2Score":"5.3","extraData":{}}</REMEDIATE> -->
|
non_test
|
ws medium severity vulnerability detected by whitesource ws medium severity vulnerability vulnerable library ms tgz tiny ms conversion utility library home page a href path to dependency file cbvpos app package json path to vulnerable library tmp git cbvpos app node modules ms package json dependency hierarchy grunt mocha webdriver tgz root library mocha tgz debug tgz x ms tgz vulnerable library vulnerability details ms is vulnerable to regular expression denial of service redos when extremely long version strings are parsed publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution update to version or greater an alternative would be to limit the input length of the user input before passing it into ms step up your open source security game with whitesource isopenpronvulnerability true isopenpronnewversion false ispackagebased true packages vulnerabilityidentifier ws vulnerabilitydetails ms is vulnerable to regular expression denial of service redos when extremely long version strings are parsed medium extradata
| 0
|
161,379
| 12,542,331,317
|
IssuesEvent
|
2020-06-05 13:52:45
|
fission/fission
|
https://api.github.com/repos/fission/fission
|
closed
|
Error running function with NodePort router
|
area-function area-test
|
<!-- Please answer these questions before submitting your issue. Thanks! -->
<!-- Documentation URL: https://docs.fission.io/ -->
<!-- Troubleshooting guide: https://docs.fission.io/trouble-shooting/ -->
Kubernetes v1.18
Fission v1.9.0
<!-- If you tested with other services, for example Istio, please also provide the version of service as well. -->
Kubeadm on a single master node (CentOS8 VM) and WeaveNet addon.
**Describe the bug**
Set up fission with serviceType and routerServiceType = NodePort as my kube cluster doesn't have LoadBalancer support.
When running fission fn test --name hello I get:
` Error: error executing HTTP request: Get http://<ip address>:<random port e.g. 45153>/fission-function/hello: context deadline exceeded`
I also get:
`Warning: The environment variable FISSION_ROUTER is no longer supported for this command`
Since the port changes every time I run the command, I disabled the firewall temporarily to see if this fixed the issue but it did not.
When trying to curl `<ip address>:31314/hello` I only get "error sending request to function" returned. The function logs don't show anything.
The router service has the type NodePort and port 31314 displayed. I've tested using NodePort outside of Fission following [this example](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/) and it works correctly so I don't think it's an issue with my Kubernetes installation.
**To Reproduce**
Install Fission using Helm, by running the command:
`helm install --namespace $FISSION_NAMESPACE --name-template fission \
--set serviceType=NodePort,routerServiceType=NodePort \
https://github.com/fission/fission/releases/download/1.9.0/fission-all-1.9.0.tgz`
Try to run the hello nodejs example.
I'm running a CentOS8 VM.
**Expected result**
Return response "Hello, world!"
**Actual result**
```
Warning: The environment variable FISSION_ROUTER is no longer supported for this command
Options:
...
Global Options:
...
Usage:
fission function test [options]
"Error: error executing HTTP request: Get http://<ip address>:45153/fission-function/hello: context deadline exceeded"
```
Thanks.
|
1.0
|
Error running function with NodePort router - <!-- Please answer these questions before submitting your issue. Thanks! -->
<!-- Documentation URL: https://docs.fission.io/ -->
<!-- Troubleshooting guide: https://docs.fission.io/trouble-shooting/ -->
Kubernetes v1.18
Fission v1.9.0
<!-- If you tested with other services, for example Istio, please also provide the version of service as well. -->
Kubeadm on a single master node (CentOS8 VM) and WeaveNet addon.
**Describe the bug**
Set up fission with serviceType and routerServiceType = NodePort as my kube cluster doesn't have LoadBalancer support.
When running fission fn test --name hello I get:
` Error: error executing HTTP request: Get http://<ip address>:<random port e.g. 45153>/fission-function/hello: context deadline exceeded`
I also get:
`Warning: The environment variable FISSION_ROUTER is no longer supported for this command`
Since the port changes every time I run the command, I disabled the firewall temporarily to see if this fixed the issue but it did not.
When trying to curl `<ip address>:31314/hello` I only get "error sending request to function" returned. The function logs don't show anything.
The router service has the type NodePort and port 31314 displayed. I've tested using NodePort outside of Fission following [this example](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/) and it works correctly so I don't think it's an issue with my Kubernetes installation.
**To Reproduce**
Install Fission using Helm, by running the command:
`helm install --namespace $FISSION_NAMESPACE --name-template fission \
--set serviceType=NodePort,routerServiceType=NodePort \
https://github.com/fission/fission/releases/download/1.9.0/fission-all-1.9.0.tgz`
Try to run the hello nodejs example.
I'm running a CentOS8 VM.
**Expected result**
Return response "Hello, world!"
**Actual result**
```
Warning: The environment variable FISSION_ROUTER is no longer supported for this command
Options:
...
Global Options:
...
Usage:
fission function test [options]
"Error: error executing HTTP request: Get http://<ip address>:45153/fission-function/hello: context deadline exceeded"
```
Thanks.
|
test
|
error running function with nodeport router kubernetes fission kubeadm on a single master node vm and weavenet addon describe the bug set up fission with servicetype and routerservicetype nodeport as my kube cluster doesn t have loadbalancer support when running fission fn test name hello i get error error executing http request get address fission function hello context deadline exceeded i also get warning the environment variable fission router is no longer supported for this command since the port changes every time i run the command i disabled the firewall temporarily to see if this fixed the issue but it did not when trying to curl hello i only get error sending request to function returned the function logs don t show anything the router service has the type nodeport and port displayed i ve tested using nodeport outside of fission following and it works correctly so i don t think it s an issue with my kubernetes installation to reproduce install fission using helm by running the command helm install namespace fission namespace name template fission set servicetype nodeport routerservicetype nodeport try to run the hello nodejs example i m running a vm expected result return response hello world actual result warning the environment variable fission router is no longer supported for this command options global options usage fission function test error error executing http request get address fission function hello context deadline exceeded thanks
| 1
|
344,889
| 30,770,262,711
|
IssuesEvent
|
2023-07-30 20:26:12
|
nodejs/jenkins-alerts
|
https://api.github.com/repos/nodejs/jenkins-alerts
|
closed
|
test-rackspace-win2022_vs2022-x64-6 is DOWN
|
potential-incident test-ci
|
:warning: The machine `test-rackspace-win2022_vs2022-x64-6` is currently offline.
Please refer to the [Jenkins Dashboard](https://ci.nodejs.org/manage/computer/test-rackspace-win2022_vs2022-x64-6) to check its status.
_This issue has been auto-generated by [UlisesGascon/jenkins-status-alerts-and-reporting](https://github.com/UlisesGascon/jenkins-status-alerts-and-reporting)._
|
1.0
|
test-rackspace-win2022_vs2022-x64-6 is DOWN - :warning: The machine `test-rackspace-win2022_vs2022-x64-6` is currently offline.
Please refer to the [Jenkins Dashboard](https://ci.nodejs.org/manage/computer/test-rackspace-win2022_vs2022-x64-6) to check its status.
_This issue has been auto-generated by [UlisesGascon/jenkins-status-alerts-and-reporting](https://github.com/UlisesGascon/jenkins-status-alerts-and-reporting)._
|
test
|
test rackspace is down warning the machine test rackspace is currently offline please refer to the to check its status this issue has been auto generated by
| 1
|
138,753
| 31,022,803,502
|
IssuesEvent
|
2023-08-10 07:01:57
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Feature]: For first time user, collapse all sections except DATA in property pane
|
Enhancement Frontend BE Coders Pod Integrations Pod Integrations Pod General
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Summary
source: https://docs.google.com/spreadsheets/d/1dFn8sgQluoxAWU-pgXCvkFTKq-Kb0ccvp8Wyuki6cgs/edit#gid=1778004507&range=7:7
notion: https://www.notion.so/appsmith/Activation-60c64894f42d4cdcb92220c1dbc73802?p=a27e7cfdc4ae4445b74f20b8963a637b&pm=s
Only for the first time user, at exactly the time that they see the [[Learning popover after binding](https://www.notion.so/Learning-popover-after-binding-790ca4efddd44cb79ae4ad4a3f7f0c41?pvs=21)](https://www.notion.so/Learning-popover-after-binding-790ca4efddd44cb79ae4ad4a3f7f0c41?pvs=21), all sections in the property pane except `Data` should be collapsed
https://www.figma.com/file/kbU9xPv44neCfv1FFo9Ndu/User-Activation?type=design&node-id=1091-60984&mode=design&t=Sjmppk3lWzTvZrYf-0
### Why should this be worked on?
activation project
|
1.0
|
[Feature]: For first time user, collapse all sections except DATA in property pane - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Summary
source: https://docs.google.com/spreadsheets/d/1dFn8sgQluoxAWU-pgXCvkFTKq-Kb0ccvp8Wyuki6cgs/edit#gid=1778004507&range=7:7
notion: https://www.notion.so/appsmith/Activation-60c64894f42d4cdcb92220c1dbc73802?p=a27e7cfdc4ae4445b74f20b8963a637b&pm=s
Only for the first time user, at exactly the time that they see the [[Learning popover after binding](https://www.notion.so/Learning-popover-after-binding-790ca4efddd44cb79ae4ad4a3f7f0c41?pvs=21)](https://www.notion.so/Learning-popover-after-binding-790ca4efddd44cb79ae4ad4a3f7f0c41?pvs=21), all sections in the property pane except `Data` should be collapsed
https://www.figma.com/file/kbU9xPv44neCfv1FFo9Ndu/User-Activation?type=design&node-id=1091-60984&mode=design&t=Sjmppk3lWzTvZrYf-0
### Why should this be worked on?
activation project
|
non_test
|
for first time user collapse all sections except data in property pane is there an existing issue for this i have searched the existing issues summary source notion only for the first time user at exactly the time that they see the all sections in the property pane except data should be collapsed why should this be worked on activation project
| 0
|
62,241
| 6,786,866,914
|
IssuesEvent
|
2017-10-31 00:16:12
|
easydigitaldownloads/edd-free-downloads
|
https://api.github.com/repos/easydigitaldownloads/edd-free-downloads
|
closed
|
No log entry created when downloading files from modal
|
Bug Has PR Needs Testing
|
Downloads -> Reports -> Logs ... filtered by `File Downloads`
When using the Free Downloads modal, no log entry is created for downloaded product files.
|
1.0
|
No log entry created when downloading files from modal - Downloads -> Reports -> Logs ... filtered by `File Downloads`
When using the Free Downloads modal, no log entry is created for downloaded product files.
|
test
|
no log entry created when downloading files from modal downloads reports logs filtered by file downloads when using the free downloads modal no log entry is created for downloaded product files
| 1
|
8,109
| 3,137,534,871
|
IssuesEvent
|
2015-09-11 03:38:30
|
angular-ui/angular-google-maps
|
https://api.github.com/repos/angular-ui/angular-google-maps
|
closed
|
website branch bower deps broken due to angular 1.3 vs 1.4.4 resolutions conflict
|
documentation website
|
see the bottom of #1498
|
1.0
|
website branch bower deps broken due to angular 1.3 vs 1.4.4 resolutions conflict - see the bottom of #1498
|
non_test
|
website branch bower deps broken due to angular vs resolutions conflict see the bottom of
| 0
|
16,736
| 4,081,614,521
|
IssuesEvent
|
2016-05-31 09:35:04
|
torchbox/wagtail
|
https://api.github.com/repos/torchbox/wagtail
|
closed
|
Update 'custom branding' docs to ditch django-overextends on Django 1.9+
|
difficulty:Easy Documentation size:Small
|
As observed in https://github.com/torchbox/wagtail/pull/2414#issuecomment-218782760, it appears that Django 1.9 has finally fixed circular template inheritance (https://docs.djangoproject.com/en/1.9/releases/1.9/#templates - "Django template loaders can now extend templates recursively"), which would seem to make django-overextends obsolete.
So, someone ought to:
* check that the custom branding snippets on http://docs.wagtail.io/en/v1.4.4/advanced_topics/customisation/branding.html work without django-overextends, with a plain `{% extends %}` tag
* if so, update that page with simplified instructions for Django 1.9, with a brief note at the bottom that Django 1.8 projects will need to use django-overextends.
|
1.0
|
Update 'custom branding' docs to ditch django-overextends on Django 1.9+ - As observed in https://github.com/torchbox/wagtail/pull/2414#issuecomment-218782760, it appears that Django 1.9 has finally fixed circular template inheritance (https://docs.djangoproject.com/en/1.9/releases/1.9/#templates - "Django template loaders can now extend templates recursively"), which would seem to make django-overextends obsolete.
So, someone ought to:
* check that the custom branding snippets on http://docs.wagtail.io/en/v1.4.4/advanced_topics/customisation/branding.html work without django-overextends, with a plain `{% extends %}` tag
* if so, update that page with simplified instructions for Django 1.9, with a brief note at the bottom that Django 1.8 projects will need to use django-overextends.
|
non_test
|
update custom branding docs to ditch django overextends on django as observed in it appears that django has finally fixed circular template inheritance django template loaders can now extend templates recursively which would seem to make django overextends obsolete so someone ought to check that the custom branding snippets on work without django overextends with a plain extends tag if so update that page with simplified instructions for django with a brief note at the bottom that django projects will need to use django overextends
| 0
|
149,170
| 23,440,719,599
|
IssuesEvent
|
2022-08-15 14:37:55
|
carbon-design-system/carbon-website
|
https://api.github.com/repos/carbon-design-system/carbon-website
|
opened
|
Fluid inputs: TextInput docs
|
type: enhancement 💡 role: design :pencil2:
|
Fluid inputs: Add TextInput website docs
- [ ] Create website branch to house updates
- [ ] Create fluid input pattern with initial Fluid TextInput
- [ ] Update "style" and "usage" tabs of component page to include Fluid variant
|
1.0
|
Fluid inputs: TextInput docs - Fluid inputs: Add TextInput website docs
- [ ] Create website branch to house updates
- [ ] Create fluid input pattern with initial Fluid TextInput
- [ ] Update "style" and "usage" tabs of component page to include Fluid variant
|
non_test
|
fluid inputs textinput docs fluid inputs add textinput website docs create website branch to house updates create fluid input pattern with initial fluid textinput update style and usage tabs of component page to include fluid variant
| 0
|
278,924
| 24,185,602,721
|
IssuesEvent
|
2022-09-23 13:03:22
|
vitessio/vitess
|
https://api.github.com/repos/vitessio/vitess
|
closed
|
Switch flag definitions to be on pflag instead of flag in `package go/cmd/vttestserver`
|
Type: Internal Cleanup Type: Enhancement Component: vttestserver Component: CLI
|
Part of https://github.com/vitessio/vitess/issues/10697.
Current flags:
```
$ git grep -E "\bflag\.[A-Z]" -- go/cmd/vttestserver/*.go
go/cmd/vttestserver/main.go: flag.IntVar(&basePort, "port", 0,
go/cmd/vttestserver/main.go: flag.StringVar(&protoTopo, "proto_topo", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.SchemaDir, "schema_dir", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.DefaultSchemaDir, "default_schema_dir", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.DataDir, "data_dir", "",
go/cmd/vttestserver/main.go: flag.BoolVar(&config.OnlyMySQL, "mysql_only", false,
go/cmd/vttestserver/main.go: flag.BoolVar(&config.PersistentMode, "persistent_mode", false,
go/cmd/vttestserver/main.go: flag.BoolVar(&doSeed, "initialize_with_random_data", false,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.RngSeed, "rng_seed", 123,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.MinSize, "min_table_shard_size", 1000,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.MaxSize, "max_table_shard_size", 10000,
go/cmd/vttestserver/main.go: flag.Float64Var(&seed.NullProbability, "null_probability", 0.1,
go/cmd/vttestserver/main.go: flag.StringVar(&config.MySQLBindHost, "mysql_bind_host", "localhost",
go/cmd/vttestserver/main.go: flag.StringVar(&mycnf, "extra_my_cnf", "",
go/cmd/vttestserver/main.go: flag.StringVar(&topo.cells, "cells", "test", "Comma separated list of cells")
go/cmd/vttestserver/main.go: flag.StringVar(&topo.keyspaces, "keyspaces", "test_keyspace",
go/cmd/vttestserver/main.go: flag.StringVar(&topo.shards, "num_shards", "2",
go/cmd/vttestserver/main.go: flag.IntVar(&topo.replicas, "replica_count", 2,
go/cmd/vttestserver/main.go: flag.IntVar(&topo.rdonly, "rdonly_count", 1,
go/cmd/vttestserver/main.go: flag.StringVar(&config.Charset, "charset", "utf8mb4", "MySQL charset")
go/cmd/vttestserver/main.go: flag.StringVar(&config.PlannerVersion, "planner-version", "", "Sets the default planner to use when the session has not changed it. Valid values are: V3, Gen4, Gen4Greedy and Gen4Fallback. Gen4Fallback tries the new gen4 planner and falls back to the V3 planner if the gen4 fails.")
go/cmd/vttestserver/main.go: flag.StringVar(&config.PlannerVersionDeprecated, "planner_version", "", "planner_version is deprecated. Please use planner-version instead")
go/cmd/vttestserver/main.go: flag.StringVar(&config.SnapshotFile, "snapshot_file", "",
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableSystemSettings, "enable_system_settings", true, "This will enable the system settings to be changed per session at the database connection level")
go/cmd/vttestserver/main.go: flag.StringVar(&config.TransactionMode, "transaction_mode", "MULTI", "Transaction mode MULTI (default), SINGLE or TWOPC ")
go/cmd/vttestserver/main.go: flag.Float64Var(&config.TransactionTimeout, "queryserver-config-transaction-timeout", 0, "query server transaction timeout (in seconds), a transaction will be killed if it takes longer than this value")
go/cmd/vttestserver/main.go: flag.StringVar(&config.TabletHostName, "tablet_hostname", "localhost", "The hostname to use for the tablet otherwise it will be derived from OS' hostname")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.InitWorkflowManager, "workflow_manager_init", false, "Enable workflow manager")
go/cmd/vttestserver/main.go: flag.StringVar(&config.VSchemaDDLAuthorizedUsers, "vschema_ddl_authorized_users", "", "Comma separated list of users authorized to execute vschema ddl operations via vtgate")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ForeignKeyMode, "foreign_key_mode", "allow", "This is to provide how to handle foreign key constraint in create/alter table. Valid values are: allow, disallow")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableOnlineDDL, "enable_online_ddl", true, "Allow users to submit, review and control Online DDL")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableDirectDDL, "enable_direct_ddl", true, "Allow users to submit direct DDL statements")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoImplementation, "external_topo_implementation", "", "the topology implementation to use for vtcombo process")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoGlobalServerAddress, "external_topo_global_server_address", "", "the address of the global topology server for vtcombo process")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoGlobalRoot, "external_topo_global_root", "", "the path of the global topology data in the global topology server for vtcombo process")
go/cmd/vttestserver/main.go: if flag.Lookup(f.Name) == nil {
go/cmd/vttestserver/main.go: flag.Var(f.Value, f.Name, f.Usage)
go/cmd/vttestserver/main.go: flag.Parse()
```
|
1.0
|
Switch flag definitions to be on pflag instead of flag in `package go/cmd/vttestserver` - Part of https://github.com/vitessio/vitess/issues/10697.
Current flags:
```
$ git grep -E "\bflag\.[A-Z]" -- go/cmd/vttestserver/*.go
go/cmd/vttestserver/main.go: flag.IntVar(&basePort, "port", 0,
go/cmd/vttestserver/main.go: flag.StringVar(&protoTopo, "proto_topo", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.SchemaDir, "schema_dir", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.DefaultSchemaDir, "default_schema_dir", "",
go/cmd/vttestserver/main.go: flag.StringVar(&config.DataDir, "data_dir", "",
go/cmd/vttestserver/main.go: flag.BoolVar(&config.OnlyMySQL, "mysql_only", false,
go/cmd/vttestserver/main.go: flag.BoolVar(&config.PersistentMode, "persistent_mode", false,
go/cmd/vttestserver/main.go: flag.BoolVar(&doSeed, "initialize_with_random_data", false,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.RngSeed, "rng_seed", 123,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.MinSize, "min_table_shard_size", 1000,
go/cmd/vttestserver/main.go: flag.IntVar(&seed.MaxSize, "max_table_shard_size", 10000,
go/cmd/vttestserver/main.go: flag.Float64Var(&seed.NullProbability, "null_probability", 0.1,
go/cmd/vttestserver/main.go: flag.StringVar(&config.MySQLBindHost, "mysql_bind_host", "localhost",
go/cmd/vttestserver/main.go: flag.StringVar(&mycnf, "extra_my_cnf", "",
go/cmd/vttestserver/main.go: flag.StringVar(&topo.cells, "cells", "test", "Comma separated list of cells")
go/cmd/vttestserver/main.go: flag.StringVar(&topo.keyspaces, "keyspaces", "test_keyspace",
go/cmd/vttestserver/main.go: flag.StringVar(&topo.shards, "num_shards", "2",
go/cmd/vttestserver/main.go: flag.IntVar(&topo.replicas, "replica_count", 2,
go/cmd/vttestserver/main.go: flag.IntVar(&topo.rdonly, "rdonly_count", 1,
go/cmd/vttestserver/main.go: flag.StringVar(&config.Charset, "charset", "utf8mb4", "MySQL charset")
go/cmd/vttestserver/main.go: flag.StringVar(&config.PlannerVersion, "planner-version", "", "Sets the default planner to use when the session has not changed it. Valid values are: V3, Gen4, Gen4Greedy and Gen4Fallback. Gen4Fallback tries the new gen4 planner and falls back to the V3 planner if the gen4 fails.")
go/cmd/vttestserver/main.go: flag.StringVar(&config.PlannerVersionDeprecated, "planner_version", "", "planner_version is deprecated. Please use planner-version instead")
go/cmd/vttestserver/main.go: flag.StringVar(&config.SnapshotFile, "snapshot_file", "",
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableSystemSettings, "enable_system_settings", true, "This will enable the system settings to be changed per session at the database connection level")
go/cmd/vttestserver/main.go: flag.StringVar(&config.TransactionMode, "transaction_mode", "MULTI", "Transaction mode MULTI (default), SINGLE or TWOPC ")
go/cmd/vttestserver/main.go: flag.Float64Var(&config.TransactionTimeout, "queryserver-config-transaction-timeout", 0, "query server transaction timeout (in seconds), a transaction will be killed if it takes longer than this value")
go/cmd/vttestserver/main.go: flag.StringVar(&config.TabletHostName, "tablet_hostname", "localhost", "The hostname to use for the tablet otherwise it will be derived from OS' hostname")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.InitWorkflowManager, "workflow_manager_init", false, "Enable workflow manager")
go/cmd/vttestserver/main.go: flag.StringVar(&config.VSchemaDDLAuthorizedUsers, "vschema_ddl_authorized_users", "", "Comma separated list of users authorized to execute vschema ddl operations via vtgate")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ForeignKeyMode, "foreign_key_mode", "allow", "This is to provide how to handle foreign key constraint in create/alter table. Valid values are: allow, disallow")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableOnlineDDL, "enable_online_ddl", true, "Allow users to submit, review and control Online DDL")
go/cmd/vttestserver/main.go: flag.BoolVar(&config.EnableDirectDDL, "enable_direct_ddl", true, "Allow users to submit direct DDL statements")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoImplementation, "external_topo_implementation", "", "the topology implementation to use for vtcombo process")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoGlobalServerAddress, "external_topo_global_server_address", "", "the address of the global topology server for vtcombo process")
go/cmd/vttestserver/main.go: flag.StringVar(&config.ExternalTopoGlobalRoot, "external_topo_global_root", "", "the path of the global topology data in the global topology server for vtcombo process")
go/cmd/vttestserver/main.go: if flag.Lookup(f.Name) == nil {
go/cmd/vttestserver/main.go: flag.Var(f.Value, f.Name, f.Usage)
go/cmd/vttestserver/main.go: flag.Parse()
```
|
test
|
switch flag definitions to be on pflag instead of flag in package go cmd vttestserver part of current flags git grep e bflag go cmd vttestserver go go cmd vttestserver main go flag intvar baseport port go cmd vttestserver main go flag stringvar prototopo proto topo go cmd vttestserver main go flag stringvar config schemadir schema dir go cmd vttestserver main go flag stringvar config defaultschemadir default schema dir go cmd vttestserver main go flag stringvar config datadir data dir go cmd vttestserver main go flag boolvar config onlymysql mysql only false go cmd vttestserver main go flag boolvar config persistentmode persistent mode false go cmd vttestserver main go flag boolvar doseed initialize with random data false go cmd vttestserver main go flag intvar seed rngseed rng seed go cmd vttestserver main go flag intvar seed minsize min table shard size go cmd vttestserver main go flag intvar seed maxsize max table shard size go cmd vttestserver main go flag seed nullprobability null probability go cmd vttestserver main go flag stringvar config mysqlbindhost mysql bind host localhost go cmd vttestserver main go flag stringvar mycnf extra my cnf go cmd vttestserver main go flag stringvar topo cells cells test comma separated list of cells go cmd vttestserver main go flag stringvar topo keyspaces keyspaces test keyspace go cmd vttestserver main go flag stringvar topo shards num shards go cmd vttestserver main go flag intvar topo replicas replica count go cmd vttestserver main go flag intvar topo rdonly rdonly count go cmd vttestserver main go flag stringvar config charset charset mysql charset go cmd vttestserver main go flag stringvar config plannerversion planner version sets the default planner to use when the session has not changed it valid values are and tries the new planner and falls back to the planner if the fails go cmd vttestserver main go flag stringvar config plannerversiondeprecated planner version planner version is deprecated please use planner version instead go cmd vttestserver main go flag stringvar config snapshotfile snapshot file go cmd vttestserver main go flag boolvar config enablesystemsettings enable system settings true this will enable the system settings to be changed per session at the database connection level go cmd vttestserver main go flag stringvar config transactionmode transaction mode multi transaction mode multi default single or twopc go cmd vttestserver main go flag config transactiontimeout queryserver config transaction timeout query server transaction timeout in seconds a transaction will be killed if it takes longer than this value go cmd vttestserver main go flag stringvar config tablethostname tablet hostname localhost the hostname to use for the tablet otherwise it will be derived from os hostname go cmd vttestserver main go flag boolvar config initworkflowmanager workflow manager init false enable workflow manager go cmd vttestserver main go flag stringvar config vschemaddlauthorizedusers vschema ddl authorized users comma separated list of users authorized to execute vschema ddl operations via vtgate go cmd vttestserver main go flag stringvar config foreignkeymode foreign key mode allow this is to provide how to handle foreign key constraint in create alter table valid values are allow disallow go cmd vttestserver main go flag boolvar config enableonlineddl enable online ddl true allow users to submit review and control online ddl go cmd vttestserver main go flag boolvar config enabledirectddl enable direct ddl true allow users to submit direct ddl statements go cmd vttestserver main go flag stringvar config externaltopoimplementation external topo implementation the topology implementation to use for vtcombo process go cmd vttestserver main go flag stringvar config externaltopoglobalserveraddress external topo global server address the address of the global topology server for vtcombo process go cmd vttestserver main go flag stringvar config externaltopoglobalroot external topo global root the path of the global topology data in the global topology server for vtcombo process go cmd vttestserver main go if flag lookup f name nil go cmd vttestserver main go flag var f value f name f usage go cmd vttestserver main go flag parse
| 1
|
191,017
| 14,592,355,823
|
IssuesEvent
|
2020-12-19 17:15:31
|
LIBCAS/ARCLib
|
https://api.github.com/repos/LIBCAS/ARCLib
|
closed
|
Build from source code - invalid dependency
|
to test
|
Hello,
I downloaded the latest commit, unpacked it, and tried to build the application:
`...ARCLib$ mvn clean package -Dmaven.test.skip=true`
Maven started complaining about the unavailable `itext` library - see the output:

The build did finish, but I'm not entirely sure whether an application assembled this way is fully functional, or whether this really is just some forgotten dependency on a library that is no longer used anywhere.
So I downloaded the library in question manually and imported it into my local `.m2` repository. A subsequent repeated build completed without complaints.
Conclusion: if this is just a forgotten dependency, it would be good to remove it. If, on the other hand, the library is actually used but has to be installed manually, it would be good to mention that explicitly in the documentation.
I downloaded the library from here:
[http://jaspersoft.jfrog.io/jaspersoft/third-party-ce-artifacts/com/lowagie/itext/2.1.7.js6/](url)
and installed it into my local repository with the command:
```
mvn install:install-file -DgroupId=com.lowagie -DartifactId=itext -Dversion=2.1.7.js6 -Dpackaging=jar -Dfile=/tmp/itext-2.1.7.js6.jar
```
MD
|
1.0
|
Build from source code - invalid dependency - Hello,
I downloaded the latest commit, unpacked it, and tried to build the application:
`...ARCLib$ mvn clean package -Dmaven.test.skip=true`
Maven started complaining about the unavailable `itext` library - see the output:

The build did finish, but I'm not entirely sure whether an application assembled this way is fully functional, or whether this really is just some forgotten dependency on a library that is no longer used anywhere.
So I downloaded the library in question manually and imported it into my local `.m2` repository. A subsequent repeated build completed without complaints.
Conclusion: if this is just a forgotten dependency, it would be good to remove it. If, on the other hand, the library is actually used but has to be installed manually, it would be good to mention that explicitly in the documentation.
I downloaded the library from here:
[http://jaspersoft.jfrog.io/jaspersoft/third-party-ce-artifacts/com/lowagie/itext/2.1.7.js6/](url)
and installed it into my local repository with the command:
```
mvn install:install-file -DgroupId=com.lowagie -DartifactId=itext -Dversion=2.1.7.js6 -Dpackaging=jar -Dfile=/tmp/itext-2.1.7.js6.jar
```
MD
|
test
|
build from source code invalid dependency hello i downloaded the latest commit unpacked it and tried to build the application arclib mvn clean package dmaven test skip true maven started complaining about the unavailable itext library see the output the build did finish but i m not entirely sure whether an application assembled this way is fully functional or whether this really is just some forgotten dependency on a library that is no longer used anywhere so i downloaded the library in question manually and imported it into my local repository a subsequent repeated build completed without complaints conclusion if this is just a forgotten dependency it would be good to remove it if on the other hand the library is actually used but has to be installed manually it would be good to mention that explicitly in the documentation i downloaded the library from here url and installed it into my local repository with the command mvn install install file dgroupid com lowagie dartifactid itext dversion dpackaging jar dfile tmp itext jar md
| 1
|
187,138
| 6,746,501,786
|
IssuesEvent
|
2017-10-21 03:06:24
|
HabitRPG/habitica
|
https://api.github.com/repos/HabitRPG/habitica
|
closed
|
When adding tasks to an existing challenge, they are not tagged
|
priority: medium section: Challenges: all section: Challenges: creating / editing
|
### General Info
* UUID: bb089388-28ae-4e42-a8fa-f0c2bfb6f779
* Browser: Chrome
* OS: Windows 10
### Description
When I edit a challenge to add a new task, the new tasks aren't given the challenge tag for me or for other participants (checked on Ehlyah's account, 9c4776eb-7330-465b-85cc-e4ccef6c1ad1). The challenge in question is [this one](https://habitica.com/#/options/groups/challenges/52fbaf92-be04-43d5-9158-302c5cb4fd73).
Even if I add the tasks via the staging site and deliberately select the correct tag (note that the challenge tag doesn't seem to be applied by default on the staging site when you create a new task in a challenge), the tag is not applied.
This is a particular problem with this challenge because I'm going to be adding tasks to this challenge frequently over the next few days/weeks, and people might well want to filter by the tag (I certainly do). It's also an issue for challenges which involve adding discussion questions as people read a book/watch a series/etc.
Alys said this bug isn't currently reported and asked me to post here.
|
1.0
|
When adding tasks to an existing challenge, they are not tagged - ### General Info
* UUID: bb089388-28ae-4e42-a8fa-f0c2bfb6f779
* Browser: Chrome
* OS: Windows 10
### Description
When I edit a challenge to add a new task, the new tasks aren't given the challenge tag for me or for other participants (checked on Ehlyah's account, 9c4776eb-7330-465b-85cc-e4ccef6c1ad1). The challenge in question is [this one](https://habitica.com/#/options/groups/challenges/52fbaf92-be04-43d5-9158-302c5cb4fd73).
Even if I add the tasks via the staging site and deliberately select the correct tag (note that the challenge tag doesn't seem to be applied by default on the staging site when you create a new task in a challenge), the tag is not applied.
This is a particular problem with this challenge because I'm going to be adding tasks to this challenge frequently over the next few days/weeks, and people might well want to filter by the tag (I certainly do). It's also an issue for challenges which involve adding discussion questions as people read a book/watch a series/etc.
Alys said this bug isn't currently reported and asked me to post here.
|
non_test
|
when adding tasks to an existing challenge they are not tagged general info uuid browser chrome os windows description when i edit a challenge to add a new task the new tasks aren t given the challenge tag for me or for other participants checked on ehlyah s account the challenge in question is even if i add the tasks via the staging site and deliberately select the correct tag note that the challenge tag doesn t seem to be applied by default on the staging site when you create a new task in a challenge the tag is not applied this is a particular problem with this challenge because i m going to be adding tasks to this challenge frequently over the next few days weeks and people might well want to filter by the tag i certainly do it s also an issue for challenges which involve adding discussion questions as people read a book watch a series etc alys said this bug isn t currently reported and asked me to post here
| 0
|
69,136
| 7,125,715,987
|
IssuesEvent
|
2018-01-20 00:55:02
|
metafetish/buttplug-js
|
https://api.github.com/repos/metafetish/buttplug-js
|
closed
|
Use local server for tests instead of mock websocket server
|
testing
|
The original client tests were written when there was no local server, and use a mocked websocket server. This is fine for testing the websocket connector, but we can change most of the functionality tests to use the local server, to make the tests more concise.
|
1.0
|
Use local server for tests instead of mock websocket server - The original client tests were written when there was no local server, and use a mocked websocket server. This is fine for testing the websocket connector, but we can change most of the functionality tests to use the local server, to make the tests more concise.
|
test
|
use local server for tests instead of mock websocket server the original client tests were written when there was no local server and uses a mocked websocket server this is fine for testing the websocket connector but we can change most of the functionality tests to use the local server to make the tests more concise
| 1
|
210,990
| 16,162,789,092
|
IssuesEvent
|
2021-05-01 00:46:32
|
deweesa/Leaguerboard
|
https://api.github.com/repos/deweesa/Leaguerboard
|
closed
|
Need a testing plan for the project
|
testing
|
We have the config for using pytest; we just don't have any tests written yet.
|
1.0
|
Need a testing plan for the project - We have the config for using pytest; we just don't have any tests written yet.
|
test
|
need a testing plan for the project have the config for using pytest we just don t have any tests written yet
| 1
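The row above describes a project with pytest configured but no tests written; a minimal first test module (all names here are illustrative, not from the Leaguerboard codebase) could look like:

```python
# test_smoke.py -- a minimal pytest module; pytest collects any
# function whose name starts with "test_" from files named test_*.py
def add(a, b):
    return a + b

def test_add():
    # pytest reports this as passed when the assertion holds
    assert add(2, 3) == 5
```

Running `pytest` from the project root would then discover and run `test_add` with no extra configuration.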
|
81,744
| 7,801,783,847
|
IssuesEvent
|
2018-06-10 02:46:16
|
dw/mitogen
|
https://api.github.com/repos/dw/mitogen
|
closed
|
ansible: async jobs should outlive ansible-playbook run
|
NeedsRegressionTest ansible bug
|
To match existing behaviour.
Needs core.py support for continuing to execute after the parent disconnects... some kind of "ExternalContext.stay_alive()" or something.
|
1.0
|
ansible: async jobs should outlive ansible-playbook run - To match existing behaviour.
Needs core.py support for continuing to execute after the parent disconnects... some kind of "ExternalContext.stay_alive()" or something.
|
test
|
ansible async jobs should outlive ansible playbook run to match existing behaviour needs core py support for continuing to execute after the parent disconnects some kind of externalcontext stay alive or something
| 1
|
321,634
| 27,544,850,202
|
IssuesEvent
|
2023-03-07 11:01:19
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: sequelize failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
|
roachtest.sequelize [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8948493?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8948493?buildTab=artifacts#/sequelize) on release-22.1 @ [01068d9bae1a832385a6dc1096195ed650ab4377](https://github.com/cockroachdb/cockroach/commits/01068d9bae1a832385a6dc1096195ed650ab4377):
```
test artifacts and logs in: /artifacts/sequelize/run_1
(sequelize.go:151).func1: COMMAND_PROBLEM: exit status 21
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sequelize.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: sequelize failed - roachtest.sequelize [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8948493?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8948493?buildTab=artifacts#/sequelize) on release-22.1 @ [01068d9bae1a832385a6dc1096195ed650ab4377](https://github.com/cockroachdb/cockroach/commits/01068d9bae1a832385a6dc1096195ed650ab4377):
```
test artifacts and logs in: /artifacts/sequelize/run_1
(sequelize.go:151).func1: COMMAND_PROBLEM: exit status 21
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sequelize.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest sequelize failed roachtest sequelize with on release test artifacts and logs in artifacts sequelize run sequelize go command problem exit status parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql sessions
| 1
|
381,789
| 11,287,919,914
|
IssuesEvent
|
2020-01-16 06:21:29
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
How to set connect timeout time when server is not available or blocked
|
kind/bug lang/Python priority/P2
|
<!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
python
### What operating system (Linux, Windows,...) and version?
linux debian 9
### What runtime / compiler are you using (e.g. python version or version of gcc)
python2.7
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
just launch a client with:
1. server not worked
2. server worked , but firewall will drop server port sometimes
the client code like this.
```python
def run():
    start = time.time()
    try:
        with grpc.insecure_channel('172.20.22.12:5005') as channel:
            stub = helloworld_pb2_grpc.GreeterStub(channel)
            response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
            print("Greeter client received: " + response.message)
    except grpc._channel._Rendezvous as err:
        print("keepAlive check ping pong err in %s and caused by : %s" % ((time.time() - start), err))
```
result on server not worked is
```shell
# python greeter_client.py
keepAlive check ping pong err in 0.0137319564819 and caused by : failed to connect to all addresses
```
> cost only few ms and return
result on server port dropped by firewall is

> yea, cost 20 second and return
I have done many tests and found that some extreme cases take 2 minutes to return, or never return at all
### What did you expect to see?
How can I reduce this 20-second wait to about 2 seconds when a server is blocked by a firewall? I will then take some action to release the firewall's interception.
### What did you see instead?
I read the channel params, but I found nothing.
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
|
1.0
|
How to set connect timeout time when server is not available or blocked - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
python
### What operating system (Linux, Windows,...) and version?
linux debian 9
### What runtime / compiler are you using (e.g. python version or version of gcc)
python2.7
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
just launch a client with:
1. server not worked
2. server worked , but firewall will drop server port sometimes
the client code like this.
```python
def run():
    start = time.time()
    try:
        with grpc.insecure_channel('172.20.22.12:5005') as channel:
            stub = helloworld_pb2_grpc.GreeterStub(channel)
            response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
            print("Greeter client received: " + response.message)
    except grpc._channel._Rendezvous as err:
        print("keepAlive check ping pong err in %s and caused by : %s" % ((time.time() - start), err))
```
result on server not worked is
```shell
# python greeter_client.py
keepAlive check ping pong err in 0.0137319564819 and caused by : failed to connect to all addresses
```
> cost only few ms and return
result on server port dropped by firewall is

> yea, cost 20 second and return
I have done many tests and found that some extreme cases take 2 minutes to return, or never return at all
### What did you expect to see?
How can I reduce this 20-second wait to about 2 seconds when a server is blocked by a firewall? I will then take some action to release the firewall's interception.
### What did you see instead?
I read the channel params, but I found nothing.
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
|
non_test
|
how to set connect timeout time when server is not avalible or blocked this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using python what operating system linux windows and version linux debian what runtime compiler are you using e g python version or version of gcc what did you do if possible provide a recipe for reproducing the error try being specific and include code snippets if helpful just lanuch a client with server not worked server worked but firewall will drop server port sometimes the client code like this python def run start time time try with grpc insecure channel as channel stub helloworld grpc greeterstub channel response stub sayhello helloworld hellorequest name you print greeter client received response message except grpc channel rendezvous as err print keepalive check ping pong err in s and caused by s time time start err result on server not worked is shell python greeter client py keepalive check ping pong err in and caused by failed to connect to all addresses cost only few ms and return result on server port dropped by firewall is yea cost second and return and i have done many tests i found that some extreme cases take minutes to return or no return what did you expect to see how can handle this seconds to seconds when a server is block by firewall and i will do some action to release the firewall s interception what did you see instead i read the channel params but i found nothing make sure you include information that can help us debug full error message exception listing stack trace logs see for how to diagnose problems better anything else we should know about your project environment
| 0
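A note on the question in the row above: gRPC's Python stubs accept a per-call `timeout=` argument that bounds how long a blocked call waits, and `grpc.channel_ready_future(channel).result(timeout=...)` fails fast when the server never becomes reachable. As a library-agnostic sketch of the same idea (the helper name is illustrative, not part of gRPC), a deadline wrapper can cap any blocking call:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_deadline(fn, deadline_s, *args, **kwargs):
    """Run fn(*args, **kwargs), raising TimeoutError after deadline_s seconds.

    This only bounds how long the *caller* waits; if fn ignores the
    deadline, its worker thread keeps running in the background.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=deadline_s)
    except FutureTimeout:
        raise TimeoutError("call exceeded %.1fs deadline" % deadline_s)
    finally:
        pool.shutdown(wait=False)  # do not block on an abandoned worker
```

Wrapping the stub call (e.g. `call_with_deadline(stub.SayHello, 2.0, request)`) would bound the firewall-drop case at 2 seconds instead of the 20-second default, at the cost of a leaked worker thread per timed-out call.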
|
52,775
| 3,029,389,473
|
IssuesEvent
|
2015-08-04 12:19:00
|
guardian/frontend
|
https://api.github.com/repos/guardian/frontend
|
closed
|
Football - Default Match Data Displaying
|
bug football priority: high
|
When we don't receive a data feed for certain information (Ligue 1, for example, only provides scores) we should only display what is received. In the example of possession, the default displays 50% / 50%, but this shouldn't be displayed at all if no data is received.
http://www.theguardian.com/football/2015/apr/05/marseille-psg-ligue-1-match-report
|
1.0
|
Football - Default Match Data Displaying - When we don't receive a data feed for certain information (Ligue 1, for example, only provides scores) we should only display what is received. In the example of possession, the default displays 50% / 50%, but this shouldn't be displayed at all if no data is received.
http://www.theguardian.com/football/2015/apr/05/marseille-psg-ligue-1-match-report
|
non_test
|
football default match data displaying when we don t receive a data feed for certain information lingua for example only provides scores we should only display what is received in the example of possession the default displays but this shouldn t be displayed at all if no data received
| 0
|
42,704
| 5,460,276,576
|
IssuesEvent
|
2017-03-09 04:21:59
|
coreos/etcd
|
https://api.github.com/repos/coreos/etcd
|
closed
|
test: TestCtlV3MemberRemove
|
area/testing
|
```
--- FAIL: TestCtlV3MemberRemove (4.29s)
ctl_v3_member_test.go:91: read /dev/ptmx: input/output error (expected "9366cce900fbd794 removed from cluster 4529f4d7b68cdaf9", got ["Error: etcdserver: server stopped\r\n"])
```
|
1.0
|
test: TestCtlV3MemberRemove - ```
--- FAIL: TestCtlV3MemberRemove (4.29s)
ctl_v3_member_test.go:91: read /dev/ptmx: input/output error (expected "9366cce900fbd794 removed from cluster 4529f4d7b68cdaf9", got ["Error: etcdserver: server stopped\r\n"])
```
|
test
|
test fail ctl member test go read dev ptmx input output error expected removed from cluster got
| 1
|
78,688
| 15,586,064,772
|
IssuesEvent
|
2021-03-18 01:05:27
|
Mohib-hub/karate
|
https://api.github.com/repos/Mohib-hub/karate
|
opened
|
CVE-2020-35490 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2020-35490 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: karate/examples/gatling/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200323212715/downloadResource_20478f94-1633-47a1-ad79-827f8481d3e7/20200323212748/jackson-databind-2.9.8.jar,/tmp/ws-ua_20200323212715/downloadResource_20478f94-1633-47a1-ad79-827f8481d3e7/20200323212748/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- karate-gatling-0.9.5.jar (Root Library)
- gatling-charts-highcharts-3.0.2.jar
- gatling-charts-3.0.2.jar
- gatling-core-3.0.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35490>CVE-2020-35490</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2986">https://github.com/FasterXML/jackson-databind/issues/2986</a></p>
<p>Release Date: 2020-12-17</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/examples/gatling/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.intuit.karate:karate-gatling:0.9.5;io.gatling.highcharts:gatling-charts-highcharts:3.0.2;io.gatling:gatling-charts:3.0.2;io.gatling:gatling-core:3.0.2;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-35490","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.PerUserPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35490","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-35490 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2020-35490 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: karate/examples/gatling/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200323212715/downloadResource_20478f94-1633-47a1-ad79-827f8481d3e7/20200323212748/jackson-databind-2.9.8.jar,/tmp/ws-ua_20200323212715/downloadResource_20478f94-1633-47a1-ad79-827f8481d3e7/20200323212748/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- karate-gatling-0.9.5.jar (Root Library)
- gatling-charts-highcharts-3.0.2.jar
- gatling-charts-3.0.2.jar
- gatling-core-3.0.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35490>CVE-2020-35490</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2986">https://github.com/FasterXML/jackson-databind/issues/2986</a></p>
<p>Release Date: 2020-12-17</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/examples/gatling/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.intuit.karate:karate-gatling:0.9.5;io.gatling.highcharts:gatling-charts-highcharts:3.0.2;io.gatling:gatling-charts:3.0.2;io.gatling:gatling-core:3.0.2;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-35490","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.PerUserPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35490","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file karate examples gatling build gradle path to vulnerable library tmp ws ua downloadresource jackson databind jar tmp ws ua downloadresource jackson databind jar dependency hierarchy karate gatling jar root library gatling charts highcharts jar gatling charts jar gatling core jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons datasources peruserpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com intuit karate karate gatling io gatling highcharts gatling charts highcharts io gatling gatling charts io gatling gatling core com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons datasources peruserpooldatasource vulnerabilityurl
| 0
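The advisory in the row above applies to jackson-databind 2.x versions before 2.9.10.8. A naive dotted-version comparison (a sketch that assumes purely numeric segments, not a full Maven version parser) is enough to flag the 2.9.8 artifact found in the dependency tree:

```python
def parse_version(v):
    """Split a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version, fixed_in):
    """True when version sorts strictly before the fixed release."""
    return parse_version(version) < parse_version(fixed_in)
```

Here `is_vulnerable("2.9.8", "2.9.10.8")` returns `True` because tuple comparison is element-wise: `(2, 9, 8) < (2, 9, 10, 8)`. Real Maven coordinates can carry qualifiers like `-rc1`, which this sketch does not handle.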
|
33,118
| 4,807,647,621
|
IssuesEvent
|
2016-11-02 22:07:09
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
If a property defined in the custom config already exists with a different value in haproxy's config, it just gets added instead of being overwritten.
|
area/LBRefactor kind/bug status/resolved status/to-test
|
Tested with Rancher Version: latest cattle build from lbrefactormetadata from https://github.com/alena1108/cattle.git
Steps to reproduce the problem:
While creating a balancer service , pass the following haproxy_cfg as config parameter.
```
default_cfg = "defaults\ntimeout client 30000\n"
global_cfg = "global\nmaxconn 5096\n"
frontend_cfg = "frontend " + port +" \ntimeout connect 3000"
haproxy_cfg = default_cfg + global_cfg + frontend_cfg
```
haproxy.config file has 2 entries for timeout client and maxconn instead of just having the entry that is passed in config:
<img width="698" alt="screen shot 2016-08-16 at 4 38 24 pm" src="https://cloud.githubusercontent.com/assets/4266958/17719516/f7f0e138-63cf-11e6-9679-bc6e1ee635be.png">
<img width="698" alt="screen shot 2016-08-16 at 4 38 15 pm" src="https://cloud.githubusercontent.com/assets/4266958/17719519/fce14aa2-63cf-11e6-8ee2-7a5ad4bbad82.png">
|
1.0
|
If a property defined in the custom config already exists with a different value in haproxy's config, it just gets added instead of being overwritten. - Tested with Rancher Version: latest cattle build from lbrefactormetadata from https://github.com/alena1108/cattle.git
Steps to reproduce the problem:
While creating a balancer service , pass the following haproxy_cfg as config parameter.
```
default_cfg = "defaults\ntimeout client 30000\n"
global_cfg = "global\nmaxconn 5096\n"
frontend_cfg = "frontend " + port +" \ntimeout connect 3000"
haproxy_cfg = default_cfg + global_cfg + frontend_cfg
```
haproxy.config file has 2 entries for timeout client and maxconn instead of just having the entry that is passed in config:
<img width="698" alt="screen shot 2016-08-16 at 4 38 24 pm" src="https://cloud.githubusercontent.com/assets/4266958/17719516/f7f0e138-63cf-11e6-9679-bc6e1ee635be.png">
<img width="698" alt="screen shot 2016-08-16 at 4 38 15 pm" src="https://cloud.githubusercontent.com/assets/4266958/17719519/fce14aa2-63cf-11e6-8ee2-7a5ad4bbad82.png">
|
test
|
if property is defined in custom config with different value already exists in hapxoy s config it just gets added instead of being overwritten tested with rancher version latest cattle build from lbrefactormetadata from steps to reproduce the problem while creating a balancer service pass the following haproxy cfg as config parameter default cfg defaults ntimeout client n global cfg global nmaxconn n frontend cfg frontend port ntimeout connect haproxy cfg default cfg global cfg frontend cfg haproxy config file has entries for timeout client and maxconn instead of just having the entry that is passed in config img width alt screen shot at pm src img width alt screen shot at pm src
| 1
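The expected behavior in the row above — custom directives replacing, not duplicating, defaults — amounts to keying each directive by everything except its trailing value. A small sketch of that merge (illustrative only, not Rancher's actual merge code):

```python
def merge_directives(base, custom):
    """Merge config directive lines, letting custom override base when
    the directive name (all tokens except the last value) matches."""
    def key(line):
        tokens = line.split()
        # "timeout client 30000" -> ("timeout", "client")
        return tuple(tokens[:-1]) if len(tokens) > 1 else tuple(tokens)
    merged = {key(line): line for line in base}
    for line in custom:
        merged[key(line)] = line  # overwrite in place, preserving order
    return list(merged.values())
```

For example, `merge_directives(["timeout client 30000", "maxconn 4096"], ["maxconn 5096"])` yields `["timeout client 30000", "maxconn 5096"]` — one entry per directive, rather than the duplicated `maxconn` lines shown in the screenshots.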
|
150,134
| 11,948,921,381
|
IssuesEvent
|
2020-04-03 12:47:30
|
Oldes/Rebol-issues
|
https://api.github.com/repos/Oldes/Rebol-issues
|
closed
|
Calling an action made by oneself crashes R3
|
Oldes.resolved Status.important Test.written Type.bug
|
_Submitted by:_ **meijeru**
It is not forbidden to execute MAKE for an action! value (though of course this does not make much sense). Same goes for native! and op!.
However, calling the action or native one has just made crashes REBOL
``` rebol
>> a: make action! [[][]]
== make action! [[][]]
>> a
** crash
;; also:
>> n: make native! [[][]]
== make native! [[][]]
>> n
** crash
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1051)** [ Version: alpha 66 Type: Bug Platform: All Category: n/a Reproduce: Always Fixed-in:alpha 67 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1051</sup>
Comments:
---
> **Rebolbot** commented on Jul 6, 2009:
_Submitted by:_ **BrianH**
The error is that make action! or native! [[][]] works at all - it should throw an error.
The ACTION and NATIVE functions should be the only way to create values of these types. If the MAKE action works badly with these specs, then trial and error would eventually lead to specs that work, which would allow recreation of these functions in REBOL code when they are otherwise intentionally inaccessible. This would be a major security hole.
As it is, the behavior of the make actions of these types suggests that the resulting functions access memory badly. We are lucky that it crashes - the situation is worse for op! #1052.
---
> **Rebolbot** commented on Jul 6, 2009:
_Submitted by:_ **Carl**
For op!: see my comment in #1052.
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [Calling an OP one has made oneself gives strange error](https://github.com/Oldes/Rebol-issues/issues/1052)
---
> **Rebolbot** added **Type.bug** and **Status.important** on Jan 12, 2016
---
> **Oldes** added a commit to **[Oldes/Rebol3](https://github.com/Oldes/Rebol3/)** that referenced this issue on Apr 29, 2019:
> [FIX: Calling an action made by oneself crashes R3](https://github.com/Oldes/Rebol3/commit/0dae54bbe2b6d3c6a59edebec0ff0035b69f1a9f)
---
|
1.0
|
Calling an action made by oneself crashes R3 - _Submitted by:_ **meijeru**
It is not forbidden to execute MAKE for an action! value (though of course this does not make much sense). The same goes for native! and op!.
However, calling the action or native one has just made crashes REBOL:
``` rebol
>> a: make action! [[][]]
== make action! [[][]]
>> a
** crash
;; also:
>> n: make native! [[][]]
== make native! [[][]]
>> n
** crash
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1051)** [ Version: alpha 66 Type: Bug Platform: All Category: n/a Reproduce: Always Fixed-in: alpha 67 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1051</sup>
Comments:
---
> **Rebolbot** commented on Jul 6, 2009:
_Submitted by:_ **BrianH**
The error is that make action! or native! [[][]] works at all - it should throw an error.
The ACTION and NATIVE functions should be the only way to create values of these types. If the MAKE action works badly with these specs, then trial and error would eventually lead to specs that work, which would allow recreation of these functions in REBOL code when they are otherwise intentionally inaccessible. This would be a major security hole.
As it is, the behavior of the make actions of these types suggests that the resulting functions access memory badly. We are lucky that it crashes - the situation is worse for op! #1052.
---
> **Rebolbot** commented on Jul 6, 2009:
_Submitted by:_ **Carl**
For op!: see my comment in #1052.
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [Calling an OP one has made oneself gives strange error](https://github.com/Oldes/Rebol-issues/issues/1052)
---
> **Rebolbot** added **Type.bug** and **Status.important** on Jan 12, 2016
---
> **Oldes** added a commit to **[Oldes/Rebol3](https://github.com/Oldes/Rebol3/)** that referenced this issue on Apr 29, 2019:
> [FIX: Calling an action made by oneself crashes R3](https://github.com/Oldes/Rebol3/commit/0dae54bbe2b6d3c6a59edebec0ff0035b69f1a9f)
---
|
test
|
calling an action made by oneself crashes submitted by meijeru it is not forbidden to execute make for an action value though of course this has not much sense same goes for native and op however calling the action or native one has just made crashes rebol rebol a make action make action a crash also n make native make native n crash imported from imported from comments rebolbot commented on jul submitted by brianh the error is that make action or native works at all it should throw an error the action and native functions should be the only way to create values of these types if the make action works badly with these specs then trial and error would eventually lead to specs that work which would allow recreation of these functions in rebol code when they are otherwise intentionally inaccessible this would be a major security hole as it is the behavior of the make actions of these types suggests that the resulting functions access memory badly we are lucky that it crashes the situation is worse for op rebolbot commented on jul submitted by carl for op see my comment in rebolbot mentioned this issue on jan rebolbot added type bug and status important on jan oldes added a commit to that referenced this issue on apr
| 1
|
287,097
| 24,807,779,987
|
IssuesEvent
|
2022-10-25 06:54:50
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
stack-use-after-scope in ConvertImpl
|
testing
|
```
==645==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7f832aa21980 at pc 0x000015a9ca72 bp 0x7f862f90dfd0 sp 0x7f862f90dfc8
WRITE of size 1 at 0x7f832aa21980 thread T962 (QueryPipelineEx)
#0 0x15a9ca71 in DB::ConvertImpl<DB::DataTypeNumber<char8_t>, DB::DataTypeString, DB::NameToString, DB::ConvertDefaultBehaviorTag>::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) (/usr/bin/clickhouse+0x15a9ca71) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#1 0x15a97217 in bool DB::callOnIndexAndDataType<DB::DataTypeString, DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeInternal(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const::'lambda'(auto const&, auto const&)&, DB::ConvertDefaultBehaviorTag>(DB::TypeIndex, auto&&, DB::ConvertDefaultBehaviorTag&&) (/usr/bin/clickhouse+0x15a97217) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#2 0x15a96904 in DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeInternal(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x15a96904) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#3 0x15a946b3 in DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x15a946b3) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#4 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#5 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#6 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#7 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#8 0x2e2b37cc in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60
#9 0x2e2b37cc in DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13
#10 0x2e2b61ca in DB::ExpressionActions::execute(DB::Block&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:768:5
#11 0x2e39d532 in DB::ExecutableFunctionExpression::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const build_docker/../src/Functions/FunctionsMiscellaneous.h:48:29
#12 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#13 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#14 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#15 0x150e0c36 in DB::IFunctionBase::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/usr/bin/clickhouse+0x150e0c36) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#16 0x30334882 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:272:28
#17 0x24e4a2f5 in DB::FunctionArrayMapped<DB::ArrayMapImpl, DB::NameArrayMap>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x24e4a2f5) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#18 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#19 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#20 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#21 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#22 0x150e0c36 in DB::IFunctionBase::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/usr/bin/clickhouse+0x150e0c36) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#23 0x30334882 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:272:28
#24 0x30334251 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:262:28
#25 0x3081f2eb in DB::maskedExecute(DB::ColumnWithTypeAndName&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul> const&, DB::MaskInfo const&) build_docker/../src/Columns/MaskOperations.cpp:297:35
#26 0x1cf13cca in DB::(anonymous namespace)::FunctionIf::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const if.cpp
#27 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#28 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#29 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#30 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#31 0x2e2b37cc in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60
#32 0x2e2b37cc in DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13
#33 0x328120aa in DB::ExpressionTransform::transform(DB::Chunk&) build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:23:17
#34 0x258874dd in DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) build_docker/../src/Processors/ISimpleTransform.h:32:9
#35 0x3221930c in DB::ISimpleTransform::work() build_docker/../src/Processors/ISimpleTransform.cpp:89:9
#36 0x3226e56d in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26
#37 0x3226e56d in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92:9
#38 0x3224d904 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:228:26
#39 0x3225134b in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:194:5
#40 0x3225134b in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/../src/Processors/Executors/PipelineExecutor.cpp:315:17
#41 0x3225134b in decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/../contrib/libcxx/include/type_traits:3648:23
#42 0x3225134b in decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/../contrib/libcxx/include/tuple:1595:1
#43 0x3225134b in decltype(auto) std::__1::apply<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) build_docker/../contrib/libcxx/include/tuple:1604:1
#44 0x3225134b in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()::operator()() build_docker/../src/Common/ThreadPool.h:193:13
#45 0x3225134b in decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) build_docker/../contrib/libcxx/include/type_traits:3640:23
#46 0x3225134b in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&) build_docker/../contrib/libcxx/include/__functional/invoke.h:61:9
#47 0x3225134b in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()() build_docker/../contrib/libcxx/include/__functional/function.h:230:12
#48 0x3225134b in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/../contrib/libcxx/include/__functional/function.h:711:16
#49 0xdedfe1f in std::__1::__function::__policy_func<void ()>::operator()() const build_docker/../contrib/libcxx/include/__functional/function.h:843:16
#50 0xdedfe1f in std::__1::function<void ()>::operator()() const build_docker/../contrib/libcxx/include/__functional/function.h:1184:12
#51 0xdedfe1f in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/../src/Common/ThreadPool.cpp:294:17
#52 0xdee990c in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/../src/Common/ThreadPool.cpp:144:73
#53 0xdee990c in decltype(static_cast<void>(fp)()) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/../contrib/libcxx/include/type_traits:3640:23
#54 0xdee990c in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/../contrib/libcxx/include/thread:282:5
#55 0xdee990c in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/../contrib/libcxx/include/thread:293:5
#56 0x7f8b9d3ba608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
#57 0x7f8b9d2df132 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x11f132) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
Address 0x7f832aa21980 is a wild pointer inside of access range of size 0x000000000001.
SUMMARY: AddressSanitizer: stack-use-after-scope (/usr/bin/clickhouse+0x15a9ca71) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89) in DB::ConvertImpl<DB::DataTypeNumber<char8_t>, DB::DataTypeString, DB::NameToString, DB::ConvertDefaultBehaviorTag>::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long)
Shadow bytes around the buggy address:
0x0ff0e553c2e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c2f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0ff0e553c330:[f8]00 00 00 f8 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c350: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c360: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c370: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
```
https://s3.amazonaws.com/clickhouse-test-reports/40998/b8a0761ca2edb0b3ebdb818c1a321d43b13c8c4e/stateless_tests__asan__[1/2].html
Actually this, like #41500, does not look like a real thing, but it pops up after the upgrade to clang/llvm 15 - #41046
|
1.0
|
stack-use-after-scope in ConvertImpl - ```
==645==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7f832aa21980 at pc 0x000015a9ca72 bp 0x7f862f90dfd0 sp 0x7f862f90dfc8
WRITE of size 1 at 0x7f832aa21980 thread T962 (QueryPipelineEx)
#0 0x15a9ca71 in DB::ConvertImpl<DB::DataTypeNumber<char8_t>, DB::DataTypeString, DB::NameToString, DB::ConvertDefaultBehaviorTag>::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) (/usr/bin/clickhouse+0x15a9ca71) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#1 0x15a97217 in bool DB::callOnIndexAndDataType<DB::DataTypeString, DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeInternal(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const::'lambda'(auto const&, auto const&)&, DB::ConvertDefaultBehaviorTag>(DB::TypeIndex, auto&&, DB::ConvertDefaultBehaviorTag&&) (/usr/bin/clickhouse+0x15a97217) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#2 0x15a96904 in DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeInternal(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x15a96904) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#3 0x15a946b3 in DB::FunctionConvert<DB::DataTypeString, DB::NameToString, DB::ToStringMonotonicity>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x15a946b3) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#4 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#5 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#6 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#7 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#8 0x2e2b37cc in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60
#9 0x2e2b37cc in DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13
#10 0x2e2b61ca in DB::ExpressionActions::execute(DB::Block&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:768:5
#11 0x2e39d532 in DB::ExecutableFunctionExpression::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const build_docker/../src/Functions/FunctionsMiscellaneous.h:48:29
#12 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#13 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#14 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#15 0x150e0c36 in DB::IFunctionBase::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/usr/bin/clickhouse+0x150e0c36) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#16 0x30334882 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:272:28
#17 0x24e4a2f5 in DB::FunctionArrayMapped<DB::ArrayMapImpl, DB::NameArrayMap>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x24e4a2f5) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#18 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#19 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#20 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#21 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#22 0x150e0c36 in DB::IFunctionBase::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/usr/bin/clickhouse+0x150e0c36) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#23 0x30334882 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:272:28
#24 0x30334251 in DB::ColumnFunction::reduce() const build_docker/../src/Columns/ColumnFunction.cpp:262:28
#25 0x3081f2eb in DB::maskedExecute(DB::ColumnWithTypeAndName&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul> const&, DB::MaskInfo const&) build_docker/../src/Columns/MaskOperations.cpp:297:35
#26 0x1cf13cca in DB::(anonymous namespace)::FunctionIf::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const if.cpp
#27 0x150e1cab in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/usr/bin/clickhouse+0x150e1cab) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89)
#28 0x2c39465b in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15
#29 0x2c396276 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:302:22
#30 0x2c39a68a in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16
#31 0x2e2b37cc in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60
#32 0x2e2b37cc in DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13
#33 0x328120aa in DB::ExpressionTransform::transform(DB::Chunk&) build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:23:17
#34 0x258874dd in DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) build_docker/../src/Processors/ISimpleTransform.h:32:9
#35 0x3221930c in DB::ISimpleTransform::work() build_docker/../src/Processors/ISimpleTransform.cpp:89:9
#36 0x3226e56d in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26
#37 0x3226e56d in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92:9
#38 0x3224d904 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:228:26
#39 0x3225134b in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:194:5
#40 0x3225134b in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/../src/Processors/Executors/PipelineExecutor.cpp:315:17
#41 0x3225134b in decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/../contrib/libcxx/include/type_traits:3648:23
#42 0x3225134b in decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/../contrib/libcxx/include/tuple:1595:1
#43 0x3225134b in decltype(auto) std::__1::apply<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) build_docker/../contrib/libcxx/include/tuple:1604:1
#44 0x3225134b in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()::operator()() build_docker/../src/Common/ThreadPool.h:193:13
#45 0x3225134b in decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) build_docker/../contrib/libcxx/include/type_traits:3640:23
#46 0x3225134b in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&) build_docker/../contrib/libcxx/include/__functional/invoke.h:61:9
#47 0x3225134b in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()() build_docker/../contrib/libcxx/include/__functional/function.h:230:12
#48 0x3225134b in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/../contrib/libcxx/include/__functional/function.h:711:16
#49 0xdedfe1f in std::__1::__function::__policy_func<void ()>::operator()() const build_docker/../contrib/libcxx/include/__functional/function.h:843:16
#50 0xdedfe1f in std::__1::function<void ()>::operator()() const build_docker/../contrib/libcxx/include/__functional/function.h:1184:12
#51 0xdedfe1f in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/../src/Common/ThreadPool.cpp:294:17
#52 0xdee990c in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/../src/Common/ThreadPool.cpp:144:73
#53 0xdee990c in decltype(static_cast<void>(fp)()) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/../contrib/libcxx/include/type_traits:3640:23
#54 0xdee990c in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/../contrib/libcxx/include/thread:282:5
#55 0xdee990c in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/../contrib/libcxx/include/thread:293:5
#56 0x7f8b9d3ba608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
#57 0x7f8b9d2df132 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x11f132) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
Address 0x7f832aa21980 is a wild pointer inside of access range of size 0x000000000001.
SUMMARY: AddressSanitizer: stack-use-after-scope (/usr/bin/clickhouse+0x15a9ca71) (BuildId: d31a394980731774a11b5afc634e176ab6b44e89) in DB::ConvertImpl<DB::DataTypeNumber<char8_t>, DB::DataTypeString, DB::NameToString, DB::ConvertDefaultBehaviorTag>::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long)
Shadow bytes around the buggy address:
0x0ff0e553c2e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c2f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0ff0e553c330:[f8]00 00 00 f8 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c350: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c360: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c370: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0ff0e553c380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
```
https://s3.amazonaws.com/clickhouse-test-reports/40998/b8a0761ca2edb0b3ebdb818c1a321d43b13c8c4e/stateless_tests__asan__[1/2].html
Actually this, like #41500, does not look like a real thing, but pops up after the upgrade to clang/llvm 15 - #41046
|
test
|
stack use after scope in convertimpl error addresssanitizer stack use after scope on address at pc bp sp write of size at thread querypipelineex in db convertimpl db datatypestring db nametostring db convertdefaultbehaviortag execute std vector const std shared ptr const unsigned long usr bin clickhouse buildid in bool db callonindexanddatatype executeinternal std vector const std shared ptr const unsigned long const lambda auto const auto const db convertdefaultbehaviortag db typeindex auto db convertdefaultbehaviortag usr bin clickhouse buildid in db functionconvert executeinternal std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db functionconvert executeimpl std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db functiontoexecutablefunctionadaptor executeimpl std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db iexecutablefunction executewithoutlowcardinalitycolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction executewithoutsparsecolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction execute std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db executeaction db expressionactions action const db anonymous namespace executioncontext bool build docker src interpreters expressionactions cpp in db expressionactions execute db block unsigned long bool const build docker src interpreters expressionactions cpp in db expressionactions execute db block bool const build docker src interpreters expressionactions cpp in db executablefunctionexpression executeimpl std vector const std shared ptr const unsigned long const build docker src functions functionsmiscellaneous h in db iexecutablefunction executewithoutlowcardinalitycolumns std vector 
const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction executewithoutsparsecolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction execute std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db ifunctionbase execute std vector const std shared ptr const unsigned long bool const usr bin clickhouse buildid in db columnfunction reduce const build docker src columns columnfunction cpp in db functionarraymapped executeimpl std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db functiontoexecutablefunctionadaptor executeimpl std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db iexecutablefunction executewithoutlowcardinalitycolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction executewithoutsparsecolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction execute std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db ifunctionbase execute std vector const std shared ptr const unsigned long bool const usr bin clickhouse buildid in db columnfunction reduce const build docker src columns columnfunction cpp in db columnfunction reduce const build docker src columns columnfunction cpp in db maskedexecute db columnwithtypeandname db podarray const db maskinfo const build docker src columns maskoperations cpp in db anonymous namespace functionif executeimpl std vector const std shared ptr const unsigned long const if cpp in db functiontoexecutablefunctionadaptor executeimpl std vector const std shared ptr const unsigned long const usr bin clickhouse buildid in db iexecutablefunction 
executewithoutlowcardinalitycolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction executewithoutsparsecolumns std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db iexecutablefunction execute std vector const std shared ptr const unsigned long bool const build docker src functions ifunction cpp in db executeaction db expressionactions action const db anonymous namespace executioncontext bool build docker src interpreters expressionactions cpp in db expressionactions execute db block unsigned long bool const build docker src interpreters expressionactions cpp in db expressiontransform transform db chunk build docker src processors transforms expressiontransform cpp in db isimpletransform transform db chunk db chunk build docker src processors isimpletransform h in db isimpletransform work build docker src processors isimpletransform cpp in db executejob db executinggraph node db readprogresscallback build docker src processors executors executionthreadcontext cpp in db executionthreadcontext executetask build docker src processors executors executionthreadcontext cpp in db pipelineexecutor executestepimpl unsigned long std atomic build docker src processors executors pipelineexecutor cpp in db pipelineexecutor executesinglethread unsigned long build docker src processors executors pipelineexecutor cpp in db pipelineexecutor spawnthreads operator const build docker src processors executors pipelineexecutor cpp in decltype static cast fp std invoke constexpr db pipelineexecutor spawnthreads build docker contrib libcxx include type traits in decltype auto std apply tuple impl db pipelineexecutor spawnthreads std tuple std tuple indices build docker contrib libcxx include tuple in decltype auto std apply db pipelineexecutor spawnthreads std tuple build docker contrib libcxx include tuple in threadfromglobalpoolimpl 
threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda operator build docker src common threadpool h in decltype static cast fp std invoke threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda db pipelineexecutor spawnthreads build docker contrib libcxx include type traits in void std invoke void return wrapper call threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda threadfromglobalpoolimpl threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda build docker contrib libcxx include functional invoke h in std function default alloc func threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda void operator build docker contrib libcxx include functional function h in void std function policy invoker call impl threadfromglobalpoolimpl db pipelineexecutor spawnthreads lambda void std function policy storage const build docker contrib libcxx include functional function h in std function policy func operator const build docker contrib libcxx include functional function h in std function operator const build docker contrib libcxx include functional function h in threadpoolimpl worker std list iterator build docker src common threadpool cpp in void threadpoolimpl scheduleimpl std function int std optional bool operator const build docker src common threadpool cpp in decltype static cast fp std invoke scheduleimpl std function int std optional bool void build docker contrib libcxx include type traits in void std thread execute void threadpoolimpl scheduleimpl std function int std optional bool std tuple scheduleimpl std function int std optional bool std tuple indices build docker contrib libcxx include thread in void std thread proxy void threadpoolimpl scheduleimpl std function int std optional bool void build docker contrib libcxx include thread in start thread lib linux gnu libpthread so buildid in clone lib linux gnu libc so buildid address is a wild pointer inside of access range of size summary addresssanitizer stack 
use after scope usr bin clickhouse buildid in db convertimpl db datatypestring db nametostring db convertdefaultbehaviortag execute std vector const std shared ptr const unsigned long shadow bytes around the buggy address shadow byte legend one shadow byte represents application bytes addressable partially addressable heap left redzone fa freed heap region fd stack left redzone stack mid redzone stack right redzone stack after return stack use after scope global redzone global init order poisoned by user container overflow fc array cookie ac intra object redzone bb asan internal fe left alloca redzone ca right alloca redzone cb html actually this like does not looks like a real thing but pops up after upgrade to clang llvm
| 1
|
224,971
| 17,786,904,412
|
IssuesEvent
|
2021-08-31 12:12:40
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
opened
|
[Quest] Trinket quests - Darkmoon
|
Confirmed By Tester
|
**Links:**
Darkmoon Tsunami Deck http://cata.cavernoftime.com/quest=27666
Darkmoon Earthquake Deck http://cata.cavernoftime.com/quest=27667
Darkmoon Hurricane Deck http://cata.cavernoftime.com/quest=27665
Darkmoon Volcanic Deck http://cata.cavernoftime.com/quest=27664
**What is happening:**
After researching another report, I came across information that these quests are meant to be repeatable
- [ ] Atm you can only do each quest once
then the player gets the error message <you have already completed this quest>
**What should happen:**
Should be repeatable as long as a player has the mats/decks farmed throughout the month, which can then be delivered to the vendor during the Darkmoon Faire event
**Other Information:**

|
1.0
|
[Quest] Trinket quests - Darkmoon - **Links:**
Darkmoon Tsunami Deck http://cata.cavernoftime.com/quest=27666
Darkmoon Earthquake Deck http://cata.cavernoftime.com/quest=27667
Darkmoon Hurricane Deck http://cata.cavernoftime.com/quest=27665
Darkmoon Volcanic Deck http://cata.cavernoftime.com/quest=27664
**What is happening:**
After researching another report, I came across information that these quests are meant to be repeatable
- [ ] Atm you can only do each quest once
then the player gets the error message <you have already completed this quest>
**What should happen:**
Should be repeatable as long as a player has the mats/decks farmed throughout the month, which can then be delivered to the vendor during the Darkmoon Faire event
**Other Information:**

|
test
|
trinket quests darkmoon links darkmoon tsunami deck darkmoon earthquake deck darkmoon hurricane deck darkmoon volcanic deck what is happening after researching another report i came across information that these quests are meant to be repeatable atm you can only do each quest once then player gets the error message what should happen should be repeatable as long as a player has the mats decks farmed thoughout the month and then can be delivered to the vender during the darkmoon faire event other information
| 1
|
12,793
| 9,957,209,299
|
IssuesEvent
|
2019-07-05 16:01:04
|
Tribler/tribler
|
https://api.github.com/repos/Tribler/tribler
|
closed
|
Sonar Cloud PR analysis response is not correct
|
broken infrastructure
|
Sonar Cloud is performing the code analysis with the proper quality gates, but the response is not propagated properly to GitHub status. This is likely because of a plugin issue in Jenkins.


|
1.0
|
Sonar Cloud PR analysis response is not correct - Sonar Cloud is performing the code analysis with the proper quality gates, but the response is not propagated properly to GitHub status. This is likely because of a plugin issue in Jenkins.


|
non_test
|
sonar cloud pr analysis response is not correct sonar cloud is performing the code analysis with the proper quality gates but the response is not propagated properly to github status it could likely because of a plugin issue in jenkins
| 0
|
450,629
| 31,933,098,545
|
IssuesEvent
|
2023-09-19 08:44:44
|
kartoza/cplus-plugin
|
https://api.github.com/repos/kartoza/cplus-plugin
|
closed
|
User Story 10: User Defined PWL (Priority Weighting Layer)
|
Size: 1 development documentation
|
User Defined PWL development documentation. Document the processes and implementation of user created PWL.

|
1.0
|
User Story 10: User Defined PWL (Priority Weighting Layer) - User Defined PWL development documentation. Document the processes and implementation of user created PWL.

|
non_test
|
user story user defined pwl priority weighting layer user defined pwl development documentation document the processes and implementation of user created pwl
| 0
|
249,021
| 21,094,951,199
|
IssuesEvent
|
2022-04-04 09:25:26
|
arturo-lang/arturo
|
https://api.github.com/repos/arturo-lang/arturo
|
closed
|
[Core\pop] verify functionality
|
library unit-test todo easy stale
|
[Core\pop] verify functionality
https://github.com/arturo-lang/arturo/blob/a971add892fe3d675b3320f356cf2d96179e2a22/src/library/Core.nim#L338
```text
VNULL
# TODO(Core\pop) verify functionality
# labels: library, unit-test,easy
builtin "pop",
alias = unaliased,
rule = PrefixPrecedence,
```
56b192e37bc6e6d5d326a41e7de388310b57ced1
|
1.0
|
[Core\pop] verify functionality - [Core\pop] verify functionality
https://github.com/arturo-lang/arturo/blob/a971add892fe3d675b3320f356cf2d96179e2a22/src/library/Core.nim#L338
```text
VNULL
# TODO(Core\pop) verify functionality
# labels: library, unit-test,easy
builtin "pop",
alias = unaliased,
rule = PrefixPrecedence,
```
56b192e37bc6e6d5d326a41e7de388310b57ced1
|
test
|
verify functionality verify functionality text vnull todo core pop verify functionality labels library unit test easy builtin pop alias unaliased rule prefixprecedence
| 1
|
321,532
| 27,537,058,721
|
IssuesEvent
|
2023-03-07 04:52:31
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Test failure: graceful-node-shutdown (GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down with Pod priority should be able to gracefully shutdown pods with various grace periods) - Failure cluster [3ee80f0d...]
|
sig/node kind/failing-test triage/accepted
|
### Failure cluster [3ee80f0d44deded0a628](https://go.k8s.io/triage#3ee80f0d44deded0a628)
##### Error text:
```
[FAILED] Timed out after 10.000s.
Expected success, but got an error:
<*errors.errorString | 0xc0019c4be0>: {
s: "pod (graceful-node-shutdown-2936/period-a-5-f099b778-08ba-40f0-a543-0f4d20bafdd5) should be shutdown, reason: ",
}
pod (graceful-node-shutdown-2936/period-a-5-f099b778-08ba-40f0-a543-0f4d20bafdd5) should be shutdown, reason:
In [It] at: test/e2e_node/node_shutdown_linux_test.go:539 @ 03/04/23 06:06:13.522
```
#### Recent failures:
[3/6/2023, 11:34:56 AM ci-kubernetes-node-swap-ubuntu-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-ubuntu-serial/1632826614901379072)
[3/6/2023, 11:21:00 AM ci-kubernetes-node-kubelet-serial-cri-o](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-cri-o/1632823595031859200)
[3/6/2023, 10:20:47 AM ci-kubernetes-node-kubelet-serial-containerd](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1632808440537550848)
[3/6/2023, 9:47:47 AM ci-cos-cgroupv1-containerd-node-e2e-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cos-cgroupv1-containerd-node-e2e-serial/1632800135534612480)
[3/6/2023, 7:35:45 AM ci-kubernetes-node-swap-ubuntu-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-ubuntu-serial/1632766161345056768)
/kind failing-test
<!-- If this is a flake, please add: /kind flake -->
/sig node
|
1.0
|
Test failure: graceful-node-shutdown (GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down with Pod priority should be able to gracefully shutdown pods with various grace periods) - Failure cluster [3ee80f0d...] - ### Failure cluster [3ee80f0d44deded0a628](https://go.k8s.io/triage#3ee80f0d44deded0a628)
##### Error text:
```
[FAILED] Timed out after 10.000s.
Expected success, but got an error:
<*errors.errorString | 0xc0019c4be0>: {
s: "pod (graceful-node-shutdown-2936/period-a-5-f099b778-08ba-40f0-a543-0f4d20bafdd5) should be shutdown, reason: ",
}
pod (graceful-node-shutdown-2936/period-a-5-f099b778-08ba-40f0-a543-0f4d20bafdd5) should be shutdown, reason:
In [It] at: test/e2e_node/node_shutdown_linux_test.go:539 @ 03/04/23 06:06:13.522
```
#### Recent failures:
[3/6/2023, 11:34:56 AM ci-kubernetes-node-swap-ubuntu-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-ubuntu-serial/1632826614901379072)
[3/6/2023, 11:21:00 AM ci-kubernetes-node-kubelet-serial-cri-o](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-cri-o/1632823595031859200)
[3/6/2023, 10:20:47 AM ci-kubernetes-node-kubelet-serial-containerd](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1632808440537550848)
[3/6/2023, 9:47:47 AM ci-cos-cgroupv1-containerd-node-e2e-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cos-cgroupv1-containerd-node-e2e-serial/1632800135534612480)
[3/6/2023, 7:35:45 AM ci-kubernetes-node-swap-ubuntu-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-ubuntu-serial/1632766161345056768)
/kind failing-test
<!-- If this is a flake, please add: /kind flake -->
/sig node
|
test
|
test failure graceful node shutdown gracefulnodeshutdown when gracefully shutting down with pod priority should be able to gracefully shutdown pods with various grace periods failure cluster failure cluster error text timed out after expected success but got an error s pod graceful node shutdown period a should be shutdown reason pod graceful node shutdown period a should be shutdown reason in at test node node shutdown linux test go recent failures kind failing test sig node
| 1
|
24,947
| 2,674,860,939
|
IssuesEvent
|
2015-03-25 08:02:22
|
cs2103jan2015-f13-2j/main
|
https://api.github.com/repos/cs2103jan2015-f13-2j/main
|
closed
|
Refactor logic.Engine & data.FileIO
|
priority.medium
|
There are several portions that have similar code. Extract them into methods that can be re-used. SLAP should be our guideline.
|
1.0
|
Refactor logic.Engine & data.FileIO - There are several portions that have similar code. Extract them into methods that can be re-used. SLAP should be our guideline.
|
non_test
|
refactor logic engine data fileio there are several portions that have similar codes extract them out into methods that can be re use slap should be our guideline
| 0
|
536,204
| 15,705,888,700
|
IssuesEvent
|
2021-03-26 16:42:44
|
cloudskiff/driftctl
|
https://api.github.com/repos/cloudskiff/driftctl
|
closed
|
Add support for http state backend
|
good first issue kind/enhancement priority/3
|
**Description**
<!-- A clear and concise description of the new feature. -->
As I really like GitLab's Managed Terraform state offering (my team developed it), my initial idea was that it would be great to have support for it with `driftctl`, in a similar fashion to how an `s3` bucket can be specified.
The GitLab Managed Terraform state is actually just an http backend, where GitLab CI receives the environment variables that help with setting up the backend, and the CI takes care of a properly parameterized `terraform init` call. So, the feature is about providing http state backend support.
It would be even better if the necessary settings would come from the `backend` config. Thus it might be related to https://github.com/cloudskiff/driftctl/issues/88
**Example**
<!-- A simple example of the new feature in action
If the new feature changes an existing feature, include a simple before/after comparison. -->
```
driftctl scan --from tfstate+https://gitlab.com/api/v4/projects/<YOUR-PROJECT-ID>/terraform/state/<YOUR-STATE-NAME>
```
Is this something that could be interesting to the `driftctl` community?
|
1.0
|
Add support for http state backend - **Description**
<!-- A clear and concise description of the new feature. -->
As I really like GitLab's Managed Terraform state offering (my team developed it), my initial idea was that it would be great to have support for it with `driftctl`, in a similar fashion to how an `s3` bucket can be specified.
The GitLab Managed Terraform state is actually just an http backend, where GitLab CI receives the environment variables that help with setting up the backend, and the CI takes care of a properly parameterized `terraform init` call. So, the feature is about providing http state backend support.
It would be even better if the necessary settings would come from the `backend` config. Thus it might be related to https://github.com/cloudskiff/driftctl/issues/88
**Example**
<!-- A simple example of the new feature in action
If the new feature changes an existing feature, include a simple before/after comparison. -->
```
driftctl scan --from tfstate+https://gitlab.com/api/v4/projects/<YOUR-PROJECT-ID>/terraform/state/<YOUR-STATE-NAME>
```
Is this something that could be interesting to the `driftctl` community?
|
non_test
|
add support for http state backend description as i really like gitlab s managed terraform state offering my team developed it my initial idea was that it would be great to have support for it with driftctl in a similar fashion as an bucket can be specified the gitlab managed terraform state is actually just an http backend where gitlab ci receives the environment variables that help with setting up the backend and the ci takes care of a properly argumented terraform init call so the feature is about providing http state backend support it would be even better if the necessary settings would come from the backend config thus it might be related to example a simple example of the new feature in action if the new feature changes an existing feature include a simple before after comparison driftctl scan from tfstate is this something that could be interesting to the driftctl community
| 0
|
119,588
| 17,620,640,451
|
IssuesEvent
|
2021-08-18 14:55:37
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[RAC][Observability] Alert selection enabled when no capabilities to do actions
|
bug v8.0.0 impact:high Team:Threat Hunting Team: SecuritySolution v7.15.0
|
**Describe the bug:**
All alerts in the table have the checkbox enabled.
Alerts with no save permission should not have the checkbox enabled, since the user cannot perform any write action on them.
The count number in the "Select all X alerts" button should also reflect only updatable alerts.
https://user-images.githubusercontent.com/17747913/129920961-2c178f2d-44ce-4627-812a-3dfe74b5a3c9.mov
**Steps to reproduce:**
1. Open alerts in Observability
2. Check some alerts do not have individual updates actions
3. See all alerts are selectable to do the bulk update
4. See the "Select all X alerts" number includes alerts that are not updatable.
**Expected behavior:**
- Checkboxes are disabled for alerts that the users do not have _save_ permission.
- If there isn't any write-allowed alert on the page, the checkboxes do not appear.
- "Select all X alerts" button should not consider the alerts that are not updatable.
|
True
|
[RAC][Observability] Alert selection enabled when no capabilities to do actions - **Describe the bug:**
All alerts in the table have the checkbox enabled.
Alerts with no save permission should not have the checkbox enabled, since the user cannot perform any write action on them.
The count number in the "Select all X alerts" button should also reflect only updatable alerts.
https://user-images.githubusercontent.com/17747913/129920961-2c178f2d-44ce-4627-812a-3dfe74b5a3c9.mov
**Steps to reproduce:**
1. Open alerts in Observability
2. Check some alerts do not have individual updates actions
3. See all alerts are selectable to do the bulk update
4. See the "Select all X alerts" number includes alerts that are not updatable.
**Expected behavior:**
- Checkboxes are disabled for alerts that the users do not have _save_ permission.
- If there isn't any write-allowed alert on the page, the checkboxes do not appear.
- "Select all X alerts" button should not consider the alerts that are not updatable.
|
non_test
|
alert selection enabled when no capabilities to do actions describe the bug all alerts in the table have the checkbox enabled alerts with no save permission should not have the checkbox enabled the user can not do any write action to them the count number in the select all x alerts button should also reflect only updatable alerts steps to reproduce open alerts in observability check some alerts do not have individual updates actions see all alerts are selectable to do the bulk update see the select all x alerts number includes alerts that are not updatable expected behavior checkboxes are disabled for alerts that the users do not have save permission if there isn t any write allowed alert on the page the checkboxes do not appear select all x alerts button should not consider the alerts that are not updatable
| 0
|
62,278
| 6,792,928,457
|
IssuesEvent
|
2017-11-01 03:49:53
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
opened
|
Manual test run on Linux for 0.19.x Hotfix 3 (Release channel)
|
OS/unix-like/linux release-notes/exclude tests
|
## Per release specialty tests
- [ ] Contribution amounts were not updated during BTC => BAT conversion. ([#11719](https://github.com/brave/browser-laptop/issues/11719))
- [ ] Websockets connection issue. ([#11716](https://github.com/brave/browser-laptop/issues/11716))
- [ ] Error: ENOENT: no such file or directory, access '/Users/kjozwiak/Library/Application Support/brave/ledger-synopsis.json' . ([#11674](https://github.com/brave/browser-laptop/issues/11674))
- [ ] Error: ENOENT: no such file or directory, access 'profile\ledger-newstate.json' while upgrading. ([#11669](https://github.com/brave/browser-laptop/issues/11669))
- [ ] Unable to highlight sync words for copying. ([#11641](https://github.com/brave/browser-laptop/issues/11641))
- [ ] Backup Wallet notification shows no empty overlay modal. ([#11639](https://github.com/brave/browser-laptop/issues/11639))
- [ ] Publisher not added if revisit happens. ([#11633](https://github.com/brave/browser-laptop/issues/11633))
- [ ] Fix buttons wrap on about:preferences#payments (l10n). ([#11580](https://github.com/brave/browser-laptop/issues/11580))
- [ ] Move brave/ad-block and brave/tracking-protection deps to muon. ([#11352](https://github.com/brave/browser-laptop/issues/11352))
- [ ] HTTPS Everywhere breaks lat.ms shortlinks. ([#11303](https://github.com/brave/browser-laptop/issues/11303))
## Installer
- [ ] Check that installer is close to the size of last release.
- [ ] Check signature: If macOS, run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows, right-click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
- [ ] Check Brave, muon, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Data
- [ ] Make sure that data from the last version appears in the new version OK.
- [ ] With data from the last version, test that
- [ ] cookies are preserved
- [ ] pinned tabs can be opened
- [ ] pinned tabs can be unpinned
- [ ] unpinned tabs can be re-pinned
- [ ] opened tabs can be reloaded
- [ ] bookmarks on the bookmark toolbar can be opened
- [ ] bookmarks in the bookmark folder toolbar can be opened
## Last changeset test
- [ ] Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave).
## Widevine/Netflix test
- [ ] Test that you can log into Netflix and start a show.
## Ledger
- [ ] Verify wallet is auto created after enabling payments
- [ ] Verify monthly budget and account balance shows correct BAT and USD value
- [ ] Click on `add funds` and click on each currency and verify it shows wallet address and QR Code
- [ ] Verify that Brave BAT wallet address can be copied
- [ ] Verify adding funds via any of the currencies flows into BAT Wallet after specified amount of time
- [ ] Verify adding funds to an existing wallet with amount, adjusts the BAT value appropriately
- [ ] Change min visit and min time in advance setting and verify if the publisher list gets updated based on new setting
- [ ] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
- [ ] Check that disabling payments and enabling them again does not lose state.
- [ ] Upgrade from older version
- [ ] Verify the wallet overlay is shown when wallet transition is happening upon upgrade
- [ ] Verify transition overlay is shown post upgrade even if the payment is disabled before upgrade
- [ ] Verify publishers list is not lost after upgrade when payment is disabled in the older version
## Sync
- [ ] Verify you are able to sync two devices using the secret code
- [ ] Visit a site on device 1 and change shield setting, ensure that the saved site preference is synced to device 2
- [ ] Enable Browsing history sync on device 1, ensure the history is shown on device 2
- [ ] Import/Add bookmarks on device 1, ensure it is synced on device 2
- [ ] Ensure imported bookmark folder structure is maintained on device 2
- [ ] Ensure bookmark favicons are shown after sync
## About pages
- [ ] Test that about:adblock loads
- [ ] Test that about:autofill loads
- [ ] Test that about:bookmarks loads bookmarks
- [ ] Test that about:downloads loads downloads
- [ ] Test that about:extensions loads
- [ ] Test that about:history loads history
- [ ] Test that about:passwords loads
- [ ] Test that about:styles loads
- [ ] Test that about:welcome loads
- [ ] Test that about:preferences changing a preference takes effect right away
- [ ] Test that about:preferences language change takes effect on re-start
## Bookmarks
- [ ] Test that creating a bookmark on the bookmarks toolbar with the star button works
- [ ] Test that creating a bookmark on the bookmarks toolbar by dragging the un/lock icon works
- [ ] Test that creating a bookmark folder on the bookmarks toolbar works
- [ ] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
- [ ] Test that clicking a bookmark in the toolbar loads the bookmark.
- [ ] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
- [ ] Test that a bookmark on the bookmark toolbar can be removed via context menu
- [ ] Test that a bookmark in a bookmark folder on the bookmark toolbar can be removed via context menu
- [ ] Test that a bookmark subfolder can be removed via context menu
- [ ] Test that a bookmark folder on the bookmark toolbar can be removed via context menu
## Context menus
- [ ] Make sure context menu items in the URL bar work
- [ ] Make sure context menu items on content work with no selected text.
- [ ] Make sure context menu items on content work with selected text.
- [ ] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
- [ ] Ensure search box is shown with shortcut
- [ ] Test successful find
- [ ] Test forward and backward find navigation
- [ ] Test failed find shows 0 results
- [ ] Test match case find
## Geolocation
- [ ] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
- [ ] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
- [ ] Test that downloading a file works and that all actions on the download item work.
## Fullscreen
- [ ] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
- [ ] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs, Pinning and Tear off tabs
- [ ] Test that tabs are pinnable
- [ ] Test that tabs are unpinnable
- [ ] Test that tabs are draggable to same tabset
- [ ] Test that tabs are draggable to alternate tabset
- [ ] Test that tabs can be torn off into a new window
- [ ] Test that you are able to reattach a tab that is torn off into a new window
- [ ] Test that tab pages can be closed
- [ ] Test that tab pages can be muted
## Zoom
- [ ] Test zoom in / out shortcut works
- [ ] Test hamburger menu zooms.
- [ ] Test zoom saved when you close the browser and restore on a single site.
- [ ] Test zoom saved when you navigate within a single origin site.
- [ ] Test that navigating to a different origin resets the zoom
## Bravery settings
- [ ] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
- [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [ ] Check that ad replacement works on http://slashdot.org
- [ ] Check that toggling to blocking and allow ads works as expected.
- [ ] Test that clicking through a cert error in https://badssl.com/ works.
- [ ] Test that Safe Browsing works (https://www.raisegame.com/)
- [ ] Turning Safe Browsing off and shields off both disable safe browsing for https://www.raisegame.com/.
- [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
- [ ] Test that about:preferences default Bravery settings take effect on pages with no site settings.
- [ ] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
- [ ] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
- [ ] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
- [ ] Test that browser is not detected on https://extensions.inrialpes.fr/brave/
## Content tests
- [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
- [ ] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
- [ ] Go to https://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
- [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
- [ ] Open `about:styles` and type some misspellings on a textbox, make sure they are underlined.
- [ ] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
- [ ] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
- [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
- [ ] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
- [ ] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
- [ ] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
- [ ] Test that flash placeholder appears on http://www.homestarrunner.com
## Autofill tests
- [ ] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
- [ ] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
- [ ] Test that windows and tabs restore when closed, including active tab.
- [ ] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
## Cookie and Cache
- [ ] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
- [ ] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
- [ ] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
|
1.0
|
|
test
|
| 1
|
190,703
| 22,155,140,513
|
IssuesEvent
|
2022-06-03 21:33:21
|
vincenzodistasio97/ReactSocial
|
https://api.github.com/repos/vincenzodistasio97/ReactSocial
|
opened
|
CVE-2018-11697 (High) detected in node-sass-4.13.0.tgz
|
security vulnerability
|
## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.13.0.tgz</b></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.13.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/ReactSocial/commit/1193d502cdfb37123f347a4ceb67eb9b1fdad386">1193d502cdfb37123f347a4ceb67eb9b1fdad386</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 4.14.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11697 (High) detected in node-sass-4.13.0.tgz - ## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.13.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.13.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/ReactSocial/commit/1193d502cdfb37123f347a4ceb67eb9b1fdad386">1193d502cdfb37123f347a4ceb67eb9b1fdad386</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 4.14.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in node sass tgz cve high severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file client package json path to vulnerable library client node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass prelexer exactly which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
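The vulnerability in the record above is an out-of-bounds read in `Sass::Prelexer::exactly()`, which compares a needle against source text without verifying the remaining length. A minimal JavaScript sketch of the length-checked matching the fix implies — hypothetical function and names, not LibSass's actual C++ code:

```javascript
// Returns the index just past `needle` if `src` starts with it at `pos`,
// or -1 otherwise. The explicit length check up front is the guard whose
// absence lets a scanner read past the end of a buffer in the C++ original.
function exactly(src, pos, needle) {
  if (pos + needle.length > src.length) return -1; // bounds check
  for (let i = 0; i < needle.length; i++) {
    if (src[pos + i] !== needle[i]) return -1;
  }
  return pos + needle.length;
}

console.log(exactly("@media print", 0, "@media")); // 6
console.log(exactly("@med", 0, "@media"));         // -1 (would over-read without the check)
```

In JavaScript the over-read would merely yield `undefined`; in C++ the same missing guard reads unmapped memory, which is what makes it a denial-of-service and information-disclosure bug.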
|
72,975
| 24,392,143,633
|
IssuesEvent
|
2022-10-04 16:01:01
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Message gets falsely highlighted and notified
|
T-Defect X-Needs-Info S-Minor A-Notifications
|
### Steps to reproduce
I have a room where a custom bot sends messages about the latest vulnerabilities that are added to NIST's NVD. Since there are quite many messages every day and I don't want to be notified for each one, I configured the notification settings to only notify me for "mentions and keywords". The keywords I use are `matrix`, `synapse`, `bitwarden` and `nextcloud`.
Today I received a notification for a message (which is also highlighted red), but that message does not contain any mentions of my user or any keywords I configured. I can't reproduce why this exact message gets highlighted, while the vast majority of the other messages are not (which is correct).
This is the raw data of the message:
```json
{
"type": "m.room.message",
"sender": "@nvd-rss-bot:vollkorntomate.de",
"content": {
"msgtype": "m.text",
"body": "CVE-2021-44857 (https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2021-44857)\nAn issue was discovered in MediaWiki before 1.35.5, 1.36.x before 1.36.3, and 1.37.x before 1.37.1. It is possible to use action=mcrundo followed by action=mcrrestore to replace the content of any arbitrary page (that the user doesn't have edit rights for). This applies to any public wiki, or a private wiki that has at least one page set in $wgWhitelistRead.",
"format": "org.matrix.custom.html",
"formatted_body": "<p><a href=\"https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2021-44857\">CVE-2021-44857</a><br>An issue was discovered in MediaWiki before 1.35.5, 1.36.x before 1.36.3, and 1.37.x before 1.37.1. It is possible to use action=mcrundo followed by action=mcrrestore to replace the content of any arbitrary page (that the user doesn't have edit rights for). This applies to any public wiki, or a private wiki that has at least one page set in $wgWhitelistRead.</p>\n"
},
"origin_server_ts": 1639721546964,
"unsigned": {
"age": 407098
},
"event_id": "$YcvT-RSNq7xkN0XzY3ycBBex35KT7hqSLBKTjx93rY4",
"room_id": "!HoBafWSNFqVDHjnRkz:vollkorntomate.de"
}
```
### Outcome
The message should not have been highlighted and I should not have received a notification.
### Operating system
macOS, iOS, Web (Safari)
### Application version
macOS: Element 1.9.7, iOS: Element 1.6.10
|
1.0
|
Message gets falsely highlighted and notified - ### Steps to reproduce
I have a room where a custom bot sends messages about the latest vulnerabilities that are added to NIST's NVD. Since there are quite many messages every day and I don't want to be notified for each one, I configured the notification settings to only notify me for "mentions and keywords". The keywords I use are `matrix`, `synapse`, `bitwarden` and `nextcloud`.
Today I received a notification for a message (which is also highlighted red), but that message does not contain any mentions of my user or any keywords I configured. I can't reproduce why this exact message gets highlighted, while the vast majority of the other messages are not (which is correct).
This is the raw data of the message:
```json
{
"type": "m.room.message",
"sender": "@nvd-rss-bot:vollkorntomate.de",
"content": {
"msgtype": "m.text",
"body": "CVE-2021-44857 (https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2021-44857)\nAn issue was discovered in MediaWiki before 1.35.5, 1.36.x before 1.36.3, and 1.37.x before 1.37.1. It is possible to use action=mcrundo followed by action=mcrrestore to replace the content of any arbitrary page (that the user doesn't have edit rights for). This applies to any public wiki, or a private wiki that has at least one page set in $wgWhitelistRead.",
"format": "org.matrix.custom.html",
"formatted_body": "<p><a href=\"https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2021-44857\">CVE-2021-44857</a><br>An issue was discovered in MediaWiki before 1.35.5, 1.36.x before 1.36.3, and 1.37.x before 1.37.1. It is possible to use action=mcrundo followed by action=mcrrestore to replace the content of any arbitrary page (that the user doesn't have edit rights for). This applies to any public wiki, or a private wiki that has at least one page set in $wgWhitelistRead.</p>\n"
},
"origin_server_ts": 1639721546964,
"unsigned": {
"age": 407098
},
"event_id": "$YcvT-RSNq7xkN0XzY3ycBBex35KT7hqSLBKTjx93rY4",
"room_id": "!HoBafWSNFqVDHjnRkz:vollkorntomate.de"
}
```
### Outcome
The message should not have been highlighted and I should not have received a notification.
### Operating system
macOS, iOS, Web (Safari)
### Application version
macOS: Element 1.9.7, iOS: Element 1.6.10
|
non_test
|
message gets falsely highlighted and notified steps to reproduce i have a room where a custom bot sends messages about the latest vulnerabilities that are added to nist s nvd since there are quite many messages every day and i don t want to be notified for each one i configured the notification settings to only notify me for mentions and keywords the keywords i use are matrix synapse bitwarden and nextcloud today i received a notification for a message which is also highlighted red but that message does not contain any mentions of my user or any keywords i configured i can t reproduce why this exact message gets highlighted while the vast majority of the other messages are not which is correct this is the raw data of the message json type m room message sender nvd rss bot vollkorntomate de content msgtype m text body cve issue was discovered in mediawiki before x before and x before it is possible to use action mcrundo followed by action mcrrestore to replace the content of any arbitrary page that the user doesn t have edit rights for this applies to any public wiki or a private wiki that has at least one page set in wgwhitelistread format org matrix custom html formatted body n origin server ts unsigned age event id ycvt room id hobafwsnfqvdhjnrkz vollkorntomate de outcome the message should not have been highlighted and i should not have received a notification operating system macos ios web safari application version macos element ios element
| 0
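One plausible cause of the false highlight in the record above is keyword matching done by naive substring search over more of the event than the visible body — note the event's `format` field is `org.matrix.custom.html`, which contains the configured keyword `matrix`. This is only a hypothesis, and the sketch below uses hypothetical function names, not Element's actual notification code:

```javascript
// Naive substring matching: any occurrence counts, even inside another word.
function matchesKeywords(body, keywords) {
  const haystack = body.toLowerCase();
  return keywords.some((kw) => haystack.includes(kw.toLowerCase()));
}

// Word-boundary matching rules out matches embedded inside longer words.
function matchesKeywordsWholeWord(body, keywords) {
  return keywords.some((kw) => {
    // Escape regex metacharacters in the keyword before building the pattern.
    const escaped = kw.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    return new RegExp(`\\b${escaped}\\b`, "i").test(body);
  });
}

console.log(matchesKeywords("concatenate", ["cat"]));          // true (false positive)
console.log(matchesKeywordsWholeWord("concatenate", ["cat"])); // false
```

Note that word-boundary matching alone would not prevent the `org.matrix.custom.html` case, since `.` counts as a boundary on both sides of `matrix`; scoping the scan to the plain-text `body` field would.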
|
72,818
| 3,391,776,941
|
IssuesEvent
|
2015-11-30 16:47:58
|
washingtontrails/vms
|
https://api.github.com/repos/washingtontrails/vms
|
opened
|
thumb image appearing twice on trip reports
|
Bug High Priority MBP BUDGET Plone
|
Both when creating a new trip report and looking at others' reports the thumb image shows up twice.

|
1.0
|
thumb image appearing twice on trip reports - Both when creating a new trip report and looking at others' reports the thumb image shows up twice.

|
non_test
|
thumb image appearing twice on trip reports both when creating a new trip report and looking at others reports the thumb image shows up twice
| 0
|
143,365
| 11,545,398,074
|
IssuesEvent
|
2020-02-18 13:22:33
|
opendatakit/collect
|
https://api.github.com/repos/opendatakit/collect
|
opened
|
Test image widgets display correct image
|
testing
|
There aren't currently tests for the image widgets (widgets descending from `BaseImageWidget`) that ensures they display the correct image in a prompt. We now have tests that they display an image default however (#3642) so we should add the "normal" case to these tests as well.
|
1.0
|
Test image widgets display correct image - There aren't currently tests for the image widgets (widgets descending from `BaseImageWidget`) that ensures they display the correct image in a prompt. We now have tests that they display an image default however (#3642) so we should add the "normal" case to these tests as well.
|
test
|
test image widgets display correct image there aren t currently tests for the image widgets widgets descending from baseimagewidget that ensures they display the correct image in a prompt we now have tests that they display an image default however so we should add the normal case to these tests as well
| 1
|
224,380
| 17,691,631,275
|
IssuesEvent
|
2021-08-24 10:40:42
|
finos/waltz
|
https://api.github.com/repos/finos/waltz
|
closed
|
Navigation: change 'in-page' navigation to a sidebar approach
|
noteworthy fixed (test & close) QoL
|
Justification:
- too many sections
- consistency with standard UI patterns
- expand collapse allows for larger 'click' targets
|
1.0
|
Navigation: change 'in-page' navigation to a sidebar approach - Justification:
- too many sections
- consistency with standard UI patterns
- expand collapse allows for larger 'click' targets
|
test
|
navigation change in page navigation to a sidebar approach justification too many sections consistency with standard ui patterns expand collapse allows for larger click targets
| 1
|
170,106
| 13,174,105,130
|
IssuesEvent
|
2020-08-11 21:40:37
|
microsoft/react-native-windows
|
https://api.github.com/repos/microsoft/react-native-windows
|
closed
|
Reimplement TextInputExample RNTester Page
|
Area: Tests Area: TextInput
|
TextInputExample was removed during the 0.62 upgrade since our version has diverged from upstream significantly. We should reimplement this.
|
1.0
|
Reimplement TextInputExample RNTester Page - TextInputExample was removed during the 0.62 upgrade since our version has diverged from upstream significantly. We should reimplement this.
|
test
|
reimplement textinputexample rntester page textinputexample was removed during the upgrade since our version has diverged from upstream significantly we should reimplement this
| 1
|
74,973
| 20,592,657,721
|
IssuesEvent
|
2022-03-05 02:50:43
|
QubesOS/qubes-issues
|
https://api.github.com/repos/QubesOS/qubes-issues
|
closed
|
Arch packages fail to build
|
T: bug C: builder C: Arch Linux P: default needs diagnosis
|
[How to file a helpful issue](https://www.qubes-os.org/doc/issue-tracking/)
### Qubes OS release
R4.0, R4.1
### Brief summary
Official builds fail with message like this:
```
2022-02-18 11:45:43.314166 +0000 build-archlinux: ( 0/21) checking keys in keyring [----------------------] 0%.( 1/21) checking keys in keyring [----------------------] 4%.( 2/21) checking keys in keyring [#---------------------] 9%.( 3/21) checking keys in keyring [###-------------------] 14%.( 4/21) checking keys in keyring [####------------------] 19%.( 5/21) checking keys in keyring [#####-----------------] 23%.( 6/21) checking keys in keyring [######----------------] 28%.( 7/21) checking keys in keyring [#######---------------] 33%.( 8/21) checking keys in keyring [########--------------] 38%.( 9/21) checking keys in keyring [#########-------------] 42%.(10/21) checking keys in keyring [##########------------] 47%.(11/21) checking keys in keyring [###########-----------] 52%.(12/21) checking keys in keyring [############----------] 57%.(13/21) checking keys in keyring [#############---------] 61%.(14/21) checking keys in keyring [##############--------] 66%.(15/21) checking keys in keyring [###############-------] 71%.(16/21) checking keys in keyring [################------] 76%.(17/21) checking keys in keyring [#################-----] 80%.(18/21) checking keys in keyring [##################----] 85%.(19/21) checking keys in keyring [###################---] 90%.(20/21) checking keys in keyring [####################--] 95%.(21/21) checking keys in keyring [######################] 100%..
2022-02-18 11:45:43.314329 +0000 build-archlinux: downloading required keys....
2022-02-18 11:45:43.314367 +0000 build-archlinux: :: Import PGP key 8A871A1BBD7093EA, "Unknown Packager"? [Y/n] .
2022-02-18 11:45:53.369092 +0000 build-archlinux: error: key "8A871A1BBD7093EA" could not be looked up remotely.
2022-02-18 11:45:53.369150 +0000 build-archlinux: error: required key missing from keyring.
2022-02-18 11:45:53.369170 +0000 build-archlinux: error: failed to commit transaction (unexpected error).
```
See https://github.com/QubesOS/build-issues/issues?q=is%3Aissue+is%3Aopen+archlinux
### Steps to reproduce
Try to build few packages.
### Expected behavior
Successful build.
### Actual behavior
Build fails. Removing chroot helps for a bit (one package?).
|
1.0
|
Arch packages fail to build - [How to file a helpful issue](https://www.qubes-os.org/doc/issue-tracking/)
### Qubes OS release
R4.0, R4.1
### Brief summary
Official builds fail with message like this:
```
2022-02-18 11:45:43.314166 +0000 build-archlinux: ( 0/21) checking keys in keyring [----------------------] 0%.( 1/21) checking keys in keyring [----------------------] 4%.( 2/21) checking keys in keyring [#---------------------] 9%.( 3/21) checking keys in keyring [###-------------------] 14%.( 4/21) checking keys in keyring [####------------------] 19%.( 5/21) checking keys in keyring [#####-----------------] 23%.( 6/21) checking keys in keyring [######----------------] 28%.( 7/21) checking keys in keyring [#######---------------] 33%.( 8/21) checking keys in keyring [########--------------] 38%.( 9/21) checking keys in keyring [#########-------------] 42%.(10/21) checking keys in keyring [##########------------] 47%.(11/21) checking keys in keyring [###########-----------] 52%.(12/21) checking keys in keyring [############----------] 57%.(13/21) checking keys in keyring [#############---------] 61%.(14/21) checking keys in keyring [##############--------] 66%.(15/21) checking keys in keyring [###############-------] 71%.(16/21) checking keys in keyring [################------] 76%.(17/21) checking keys in keyring [#################-----] 80%.(18/21) checking keys in keyring [##################----] 85%.(19/21) checking keys in keyring [###################---] 90%.(20/21) checking keys in keyring [####################--] 95%.(21/21) checking keys in keyring [######################] 100%..
2022-02-18 11:45:43.314329 +0000 build-archlinux: downloading required keys....
2022-02-18 11:45:43.314367 +0000 build-archlinux: :: Import PGP key 8A871A1BBD7093EA, "Unknown Packager"? [Y/n] .
2022-02-18 11:45:53.369092 +0000 build-archlinux: error: key "8A871A1BBD7093EA" could not be looked up remotely.
2022-02-18 11:45:53.369150 +0000 build-archlinux: error: required key missing from keyring.
2022-02-18 11:45:53.369170 +0000 build-archlinux: error: failed to commit transaction (unexpected error).
```
See https://github.com/QubesOS/build-issues/issues?q=is%3Aissue+is%3Aopen+archlinux
### Steps to reproduce
Try to build few packages.
### Expected behavior
Successful build.
### Actual behavior
Build fails. Removing chroot helps for a bit (one package?).
|
non_test
|
arch packages fail to build qubes os release brief summary official builds fail with message like this build archlinux checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring checking keys in keyring build archlinux downloading required keys build archlinux import pgp key unknown packager build archlinux error key could not be looked up remotely build archlinux error required key missing from keyring build archlinux error failed to commit transaction unexpected error see steps to reproduce try to build few packages expected behavior successful build actual behavior build fails removing chroot helps for a bit one package
| 0
|
226,519
| 18,024,403,638
|
IssuesEvent
|
2021-09-17 01:12:28
|
Reiningecho90/The-Grand-Army-Project
|
https://api.github.com/repos/Reiningecho90/The-Grand-Army-Project
|
closed
|
Credit Buying
|
enhancement testing
|
There should be a way to purchase certain items in the game (only of cosmetic value) that can be attained through credits.
|
1.0
|
Credit Buying - There should be a way to purchase certain items in the game (only of cosmetic value) that can be attained through credits.
|
test
|
credit buying there should be a way to purchase certain items in the game only of cosmetic value that can be attained through credits
| 1
|
225,609
| 17,868,067,198
|
IssuesEvent
|
2021-09-06 12:02:16
|
ckeditor/ckeditor4
|
https://api.github.com/repos/ckeditor/ckeditor4
|
closed
|
Some of "plugins/forms" unit tests fails on IE8
|
status:confirmed browser:ie8 type:failingtest plugin:forms
|
## Type of report
Failing tests
## Provide description of the task

The error is `'type' is null or not an object`, the same as in #3527 so it might be related (the cause might be the same).
## Other details
* Browser: IE8
* OS: Windows 7
* CKEditor version: `4.13`
* Installed CKEditor plugins: -
|
1.0
|
Some of "plugins/forms" unit tests fails on IE8 - ## Type of report
Failing tests
## Provide description of the task

The error is `'type' is null or not an object`, the same as in #3527 so it might be related (the cause might be the same).
## Other details
* Browser: IE8
* OS: Windows 7
* CKEditor version: `4.13`
* Installed CKEditor plugins: -
|
test
|
some of plugins forms unit tests fails on type of report failing tests provide description of the task the error is type is null or not an object the same as in so it might be related the cause might be the same other details browser os windows ckeditor version installed ckeditor plugins
| 1
|
99,843
| 16,463,563,398
|
IssuesEvent
|
2021-05-22 01:02:27
|
RG4421/skyux-forms
|
https://api.github.com/repos/RG4421/skyux-forms
|
opened
|
CVE-2021-23386 (High) detected in dns-packet-1.3.1.tgz
|
security vulnerability
|
## CVE-2021-23386 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: skyux-forms/package.json</p>
<p>Path to vulnerable library: skyux-forms/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- builder-4.0.0-rc.15.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution: dns-packet - 5.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"dns-packet","packageVersion":"1.3.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@skyux-sdk/builder:4.0.0-rc.15;webpack-dev-server:3.11.0;bonjour:3.5.0;multicast-dns:6.2.3;dns-packet:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dns-packet - 5.2.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23386","vulnerabilityDetails":"This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386","cvss3Severity":"high","cvss3Score":"7.7","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23386 (High) detected in dns-packet-1.3.1.tgz - ## CVE-2021-23386 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: skyux-forms/package.json</p>
<p>Path to vulnerable library: skyux-forms/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- builder-4.0.0-rc.15.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution: dns-packet - 5.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"dns-packet","packageVersion":"1.3.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@skyux-sdk/builder:4.0.0-rc.15;webpack-dev-server:3.11.0;bonjour:3.5.0;multicast-dns:6.2.3;dns-packet:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dns-packet - 5.2.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23386","vulnerabilityDetails":"This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386","cvss3Severity":"high","cvss3Score":"7.7","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in dns packet tgz cve high severity vulnerability vulnerable library dns packet tgz an abstract encoding compliant module for encoding decoding dns packets library home page a href path to dependency file skyux forms package json path to vulnerable library skyux forms node modules dns packet package json dependency hierarchy builder rc tgz root library webpack dev server tgz bonjour tgz multicast dns tgz x dns packet tgz vulnerable library found in base branch master vulnerability details this affects the package dns packet before it creates buffers with allocunsafe and does not always fill them before forming network packets this can expose internal application memory over unencrypted network when querying crafted invalid domain names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dns packet isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree skyux sdk builder rc webpack dev server bonjour multicast dns dns packet isminimumfixversionavailable true minimumfixversion dns packet basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package dns packet before it creates buffers with allocunsafe and does not always fill them before forming network packets this can expose internal application memory over unencrypted network when querying crafted invalid domain names vulnerabilityurl
| 0
|
34,072
| 4,890,178,366
|
IssuesEvent
|
2016-11-18 12:58:33
|
Grumnir/IDEmm
|
https://api.github.com/repos/Grumnir/IDEmm
|
closed
|
Create example Unit Tests for none GUI elements
|
Test
|
First step for testing phase no should be to create unit tests for components that obviously are no GUI elements. So make an example.
|
1.0
|
Create example Unit Tests for none GUI elements - First step for testing phase no should be to create unit tests for components that obviously are no GUI elements. So make an example.
|
test
|
create example unit tests for none gui elements first step for testing phase no should be to create unit tests for components that obviously are no gui elements so make an example
| 1
|
313,166
| 26,906,752,914
|
IssuesEvent
|
2023-02-06 19:45:20
|
Azure/azure-sdk-for-java
|
https://api.github.com/repos/Azure/azure-sdk-for-java
|
closed
|
Form Recognizer Readme Issue
|
Client Docs needs-team-triage Cognitive - Form Recognizer test-manual-pass
|
1.
**Section** [link](https://github.com/Azure/azure-sdk-for-java/tree/azure-digitaltwins-core_1.3.5/sdk/formrecognizer/azure-ai-formrecognizer#documentmodeladministrationclient):

**Suggestion**:
Wrong hyperlink, update the link to https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/main/java/com/azure/ai/formrecognizer/documentanalysis/administration/DocumentModelAdministrationAsyncClient.java
@joshfree , @achandmsft , @mayurid and @mssfang for notification.
|
1.0
|
Form Recognizer Readme Issue - 1.
**Section** [link](https://github.com/Azure/azure-sdk-for-java/tree/azure-digitaltwins-core_1.3.5/sdk/formrecognizer/azure-ai-formrecognizer#documentmodeladministrationclient):

**Suggestion**:
Wrong hyperlink, update the link to https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/main/java/com/azure/ai/formrecognizer/documentanalysis/administration/DocumentModelAdministrationAsyncClient.java
@joshfree , @achandmsft , @mayurid and @mssfang for notification.
|
test
|
form recognizer readme issue section suggestion wrong hyperlink update the link to joshfree achandmsft mayurid and mssfang for notification
| 1
|
75,111
| 7,460,204,674
|
IssuesEvent
|
2018-03-30 18:37:04
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Specified address/internal-address not synced/configured correctly when adding custom host
|
area/loadbalancer area/server kind/bug status/resolved status/to-test version/2.0
|
**Rancher versions:**
rancher/server: v2.0.0-alpha16
rancher/agent: v2.0.2
**Steps to Reproduce:**
- Create a host with eth0 and eth1
- Add a host using the `docker run` command, specify `--address` (also tried with setting `--internal-address`)
**Results:**
- Node gets added, after info is synced, the IP of the interface of the default gateway is shown.
Logging:
```
time="2018-02-08T21:06:24Z" level=info msg="Option address=172.22.101.111"
time="2018-02-08T21:06:24Z" level=info msg="Option internalAddress=172.22.101.111"
time="2018-02-08T21:06:24Z" level=info msg="Option requestedHostname=node-01"
time="2018-02-08T21:06:24Z" level=info msg="Option role=[etcd worker controlplane]"
time="2018-02-08T21:06:24Z" level=info msg="Connecting to proxy" url="wss://172.22.101.101/v3/connect"
```
API
```
"created": "2018-02-08T21:06:24Z",
"createdTS": 1518123984000,
"creatorId": null,
"customConfig": {
"address": "172.22.101.111",
"internalAddress": "172.22.101.111",
"type": "/v3/schemas/customConfig"
},
"hostname": "node-01",
"id": "cluster-6zrgn:m-3429e0b1fe17",
"imported": true,
"info": {
"cpu": {
"count": 1
},
"kubernetes": {
"kubeProxyVersion": "v1.8.7-rancher1",
"kubeletVersion": "v1.8.7-rancher1"
},
"memory": {
"memTotalKiB": 2050020
},
"os": {
"dockerVersion": "1.12.6",
"kernelVersion": "4.9.34-rancher",
"operatingSystem": "Ubuntu 16.04.1 LTS"
}
},
"ipAddress": "10.0.2.15",
```
kubectl
```
# kubectl describe nodes node-01
Name: node-01
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node-01
node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
node-role.kubernetes.io/worker=true
Annotations: field.cattle.io/publicEndpoints=[{"node":"node-01","address":"10.0.2.15","port":80,"protocol":"TCP","pod":"ingress-nginx/nginx-ingress-controller-krlh7"},{"node":"node-01","address":"10.0.2.15","port"...
flannel.alpha.coreos.com/backend-data={"VtepMAC":"c6:16:a6:50:ec:3f"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=172.22.101.111
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Fri, 09 Feb 2018 12:05:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:06:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.2.15
Hostname: node-01
```
|
1.0
|
Specified address/internal-address not synced/configured correctly when adding custom host - **Rancher versions:**
rancher/server: v2.0.0-alpha16
rancher/agent: v2.0.2
**Steps to Reproduce:**
- Create a host with eth0 and eth1
- Add a host using the `docker run` command, specify `--address` (also tried with setting `--internal-address`)
**Results:**
- Node gets added, after info is synced, the IP of the interface of the default gateway is shown.
Logging:
```
time="2018-02-08T21:06:24Z" level=info msg="Option address=172.22.101.111"
time="2018-02-08T21:06:24Z" level=info msg="Option internalAddress=172.22.101.111"
time="2018-02-08T21:06:24Z" level=info msg="Option requestedHostname=node-01"
time="2018-02-08T21:06:24Z" level=info msg="Option role=[etcd worker controlplane]"
time="2018-02-08T21:06:24Z" level=info msg="Connecting to proxy" url="wss://172.22.101.101/v3/connect"
```
API
```
"created": "2018-02-08T21:06:24Z",
"createdTS": 1518123984000,
"creatorId": null,
"customConfig": {
"address": "172.22.101.111",
"internalAddress": "172.22.101.111",
"type": "/v3/schemas/customConfig"
},
"hostname": "node-01",
"id": "cluster-6zrgn:m-3429e0b1fe17",
"imported": true,
"info": {
"cpu": {
"count": 1
},
"kubernetes": {
"kubeProxyVersion": "v1.8.7-rancher1",
"kubeletVersion": "v1.8.7-rancher1"
},
"memory": {
"memTotalKiB": 2050020
},
"os": {
"dockerVersion": "1.12.6",
"kernelVersion": "4.9.34-rancher",
"operatingSystem": "Ubuntu 16.04.1 LTS"
}
},
"ipAddress": "10.0.2.15",
```
kubectl
```
# kubectl describe nodes node-01
Name: node-01
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node-01
node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
node-role.kubernetes.io/worker=true
Annotations: field.cattle.io/publicEndpoints=[{"node":"node-01","address":"10.0.2.15","port":80,"protocol":"TCP","pod":"ingress-nginx/nginx-ingress-controller-krlh7"},{"node":"node-01","address":"10.0.2.15","port"...
flannel.alpha.coreos.com/backend-data={"VtepMAC":"c6:16:a6:50:ec:3f"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=172.22.101.111
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Fri, 09 Feb 2018 12:05:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:05:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 09 Feb 2018 12:56:58 +0000 Fri, 09 Feb 2018 12:06:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.2.15
Hostname: node-01
```
|
test
|
specified address internal address not synced configured correctly when adding custom host rancher versions rancher server rancher agent steps to reproduce create a host with and add a host using the docker run command specify address also tried with setting internal address results node gets added after info is synced the ip of the interface of the default gateway is shown logging time level info msg option address time level info msg option internaladdress time level info msg option requestedhostname node time level info msg option role time level info msg connecting to proxy url wss connect api created createdts creatorid null customconfig address internaladdress type schemas customconfig hostname node id cluster m imported true info cpu count kubernetes kubeproxyversion kubeletversion memory memtotalkib os dockerversion kernelversion rancher operatingsystem ubuntu lts ipaddress kubectl kubectl describe nodes node name node role labels beta kubernetes io arch beta kubernetes io os linux kubernetes io hostname node node role kubernetes io etcd true node role kubernetes io master true node role kubernetes io worker true annotations field cattle io publicendpoints node node address port protocol tcp pod ingress nginx nginx ingress controller node node address port flannel alpha coreos com backend data vtepmac ec flannel alpha coreos com backend type vxlan flannel alpha coreos com kube subnet manager true flannel alpha coreos com public ip node alpha kubernetes io ttl volumes kubernetes io controller managed attach detach true taints creationtimestamp fri feb conditions type status lastheartbeattime lasttransitiontime reason message outofdisk false fri feb fri feb kubelethassufficientdisk kubelet has sufficient disk space available memorypressure false fri feb fri feb kubelethassufficientmemory kubelet has sufficient memory available diskpressure false fri feb fri feb kubelethasnodiskpressure kubelet has no disk pressure ready true fri feb fri feb kubeletready kubelet is posting ready status addresses internalip hostname node
| 1
|
88,613
| 17,616,894,716
|
IssuesEvent
|
2021-08-18 10:51:38
|
secondmind-labs/trieste
|
https://api.github.com/repos/secondmind-labs/trieste
|
closed
|
Use of deepcopy in models
|
code quality
|
**Describe the feature you'd like**
As we add more models, we should think about how we save models. Currently for GPflow models, we use `module_deepcopy` from `trieste\models\gpflow\utils.py`, but this may not work for other models, such as GPflux models or Keras models.
**Describe alternatives you've considered**
From discussing with Hrvoje, perhaps it would make most sense to implement a `save` method in the interfaces, which could then be different for each model type.
|
1.0
|
Use of deepcopy in models - **Describe the feature you'd like**
As we add more models, we should think about how we save models. Currently for GPflow models, we use `module_deepcopy` from `trieste\models\gpflow\utils.py`, but this may not work for other models, such as GPflux models or Keras models.
**Describe alternatives you've considered**
From discussing with Hrvoje, perhaps it would make most sense to implement a `save` method in the interfaces, which could then be different for each model type.
|
non_test
|
use of deepcopy in models describe the feature you d like as we add more models we should think about how we save models currently for gpflow models we use module deepcopy from trieste models gpflow utils py but this may not work for other models such as gpflux models or keras models describe alternatives you ve considered from discussing with hrvoje perhaps it would make most sense to implement a save method in the interfaces which could then be different for each model type
| 0
|
181,664
| 14,073,817,585
|
IssuesEvent
|
2020-11-04 05:56:29
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
opened
|
[Test] Test_data_locality_basic fails in nightly test run
|
area/test bug
|
**Describe the bug**
Test_data_locality_basic test fails with assertion error.
```
volume3 = client.by_id_volume(volume3_name)
assert len(volume3.replicas) == 1
volume3 = client.by_id_volume(volume3_name)
create_and_wait_pod(core_api, pod3)
wait_for_rebuild_start(client, volume3_name)
volume3 = client.by_id_volume(volume3_name)
assert len(volume3.replicas) == 2
wait_for_rebuild_complete(client, volume3_name)
volume3 = client.by_id_volume(volume3_name)
> assert len(volume3.replicas) == 1
```
|
1.0
|
[Test] Test_data_locality_basic fails in nightly test run - **Describe the bug**
Test_data_locality_basic test fails with assertion error.
```
volume3 = client.by_id_volume(volume3_name)
assert len(volume3.replicas) == 1
volume3 = client.by_id_volume(volume3_name)
create_and_wait_pod(core_api, pod3)
wait_for_rebuild_start(client, volume3_name)
volume3 = client.by_id_volume(volume3_name)
assert len(volume3.replicas) == 2
wait_for_rebuild_complete(client, volume3_name)
volume3 = client.by_id_volume(volume3_name)
> assert len(volume3.replicas) == 1
```
|
test
|
test data locality basic fails in nightly test run describe the bug test data locality basic test fails with assertion error client by id volume name assert len replicas client by id volume name create and wait pod core api wait for rebuild start client name client by id volume name assert len replicas wait for rebuild complete client name client by id volume name assert len replicas
| 1
|