Dataset schema (column, dtype, value range or class count):

| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 … 832k |
| id | float64 | 2.49B … 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 … 19 |
| repo | stringlengths | 4 … 112 |
| repo_url | stringlengths | 33 … 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 … 1.02k |
| labels | stringlengths | 4 … 1.54k |
| body | stringlengths | 1 … 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 … 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 … 252k |
| binary_label | int64 | 0 … 1 |
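The schema above can be checked programmatically. The following is a minimal sketch that builds a one-record pandas DataFrame from the first row below and asserts two schema properties; constructing the frame inline is an assumption for illustration, not the dataset's own loading code.

```python
import pandas as pd

# Minimal sketch of a few schema columns, populated from one record below.
# Inline construction is illustrative; the real dataset would be loaded
# from its source files.
df = pd.DataFrame({
    "type": ["IssuesEvent"],
    "created_at": ["2022-11-28 13:44:31"],
    "repo": ["neondatabase/neon"],
    "action": ["closed"],
    "label": ["test"],
    "binary_label": [1],
})

# created_at is a fixed-width string: stringlengths 19 .. 19 in the schema
assert df["created_at"].str.len().eq(19).all()
# binary_label is int64 with range 0 .. 1
assert df["binary_label"].between(0, 1).all()
```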
Record (Unnamed: 0 = 295,875)
- id: 25,511,927,323
- type: IssuesEvent
- created_at: 2022-11-28 13:44:31
- repo: neondatabase/neon
- repo_url: https://api.github.com/repos/neondatabase/neon
- action: closed
- title: `test_isolation` is flacky
- labels: t/bug a/test a/test/flaky
- body:
`debug` version of the test fails frequently. Recent failure Allure report: https://neon-github-public-dev.s3.amazonaws.com/reports/main/debug/3206518630/index.html#suites/158be07438eb5188d40b466b6acfaeb3/605ca28f08b88e15/
- index: 2.0
- text_combine:
`test_isolation` is flacky - `debug` version of the test fails frequently. Recent failure Allure report: https://neon-github-public-dev.s3.amazonaws.com/reports/main/debug/3206518630/index.html#suites/158be07438eb5188d40b466b6acfaeb3/605ca28f08b88e15/
- label: test
- text:
test isolation is flacky debug version of the test fails frequently recent failure allure report
- binary_label: 1
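Across the records below, `binary_label` tracks `label` exactly ("test" maps to 1, "non_test" to 0). A one-line encoder consistent with that pattern — inferred from the rows shown, not from any stated dataset documentation — might look like:

```python
# Inferred mapping: label "test" -> binary_label 1, "non_test" -> 0.
# Reverse-engineered from the records in this dump, not documented behavior.
def to_binary_label(label: str) -> int:
    return 1 if label == "test" else 0
```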
Record (Unnamed: 0 = 248,103)
- id: 20,995,872,960
- type: IssuesEvent
- created_at: 2022-03-29 13:28:01
- repo: elastic/beats
- repo_url: https://api.github.com/repos/elastic/beats
- action: closed
- title: [Metricbeat] Add integration tests for AWS
- labels: enhancement Metricbeat :Testing [zube]: Backlog Team:Platforms size/L
- body:
We want to improve our coverage with integration tests for aws module using real AWS services. Similar to terraform scenario in Filebeat for ELBs, we can create scenarios for deploying and destroying testing services in AWS using Terraform for individual metricsets. There are aws module in Metricbeat and aws autodiscover provider need to be tested: - Metricbeat aws module - cloudwatch metricset for a given namespace - separate tests for metricsets that are not created using light weight module, such as ec2, s3, rds - test metrics from CloudWatch like `aws.ec2.cpu.total.pct` - test metrics from other API calls like DescribeInstances in EC2: `aws.ec2.instance.state`. - test tag collection - Metricbeat `aws_ec2` autodiscover - Check that a module is launched when an EC2 instance(with a specific tag) is created - Check the module stops when this EC2 instance is deleted
- index: 1.0
- text_combine:
[Metricbeat] Add integration tests for AWS - We want to improve our coverage with integration tests for aws module using real AWS services. Similar to terraform scenario in Filebeat for ELBs, we can create scenarios for deploying and destroying testing services in AWS using Terraform for individual metricsets. There are aws module in Metricbeat and aws autodiscover provider need to be tested: - Metricbeat aws module - cloudwatch metricset for a given namespace - separate tests for metricsets that are not created using light weight module, such as ec2, s3, rds - test metrics from CloudWatch like `aws.ec2.cpu.total.pct` - test metrics from other API calls like DescribeInstances in EC2: `aws.ec2.instance.state`. - test tag collection - Metricbeat `aws_ec2` autodiscover - Check that a module is launched when an EC2 instance(with a specific tag) is created - Check the module stops when this EC2 instance is deleted
- label: test
- text:
add integration tests for aws we want to improve our coverage with integration tests for aws module using real aws services similar to terraform scenario in filebeat for elbs we can create scenarios for deploying and destroying testing services in aws using terraform for individual metricsets there are aws module in metricbeat and aws autodiscover provider need to be tested metricbeat aws module cloudwatch metricset for a given namespace separate tests for metricsets that are not created using light weight module such as rds test metrics from cloudwatch like aws cpu total pct test metrics from other api calls like describeinstances in aws instance state test tag collection metricbeat aws autodiscover check that a module is launched when an instance with a specific tag is created check the module stops when this instance is deleted
- binary_label: 1
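Comparing `text_combine` with `text` in the record above suggests `text` is a cleaned version: lowercased, URLs removed, and only purely alphabetic tokens kept (tokens containing digits such as `ec2` and `s3` disappear, while `rds` survives). A hedged reconstruction of that cleaning step — an inference from the examples, not the dataset's actual pipeline — could be:

```python
import re

# Hypothetical cleaning step inferred from the text_combine/text pairs:
# lowercase, strip URLs, then keep only purely alphabetic tokens.
def clean_text(text_combine: str) -> str:
    lowered = re.sub(r"https?://\S+", " ", text_combine.lower())  # drop URLs
    tokens = re.split(r"[^a-z0-9]+", lowered)                     # rough tokenization
    return " ".join(t for t in tokens if t and t.isalpha())       # drop digit-bearing tokens
```

For example, `clean_text("such as ec2, s3, rds")` yields `"such as rds"`, matching the corresponding fragment of the record above.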
Record (Unnamed: 0 = 204,820)
- id: 15,555,629,928
- type: IssuesEvent
- created_at: 2021-03-16 06:28:44
- repo: cockroachdb/cockroach
- repo_url: https://api.github.com/repos/cockroachdb/cockroach
- action: opened
- title: roachtest: kv/splits/nodes=3/quiesce=true failed
- labels: C-test-failure O-roachtest O-robot branch-release-21.1 release-blocker
- body:
[(roachtest).kv/splits/nodes=3/quiesce=true failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2780761&tab=buildLog) on [release-21.1@4a47e0e305cbdabf963f896c1cb571a28b34e63d](https://github.com/cockroachdb/cockroach/commits/4a47e0e305cbdabf963f896c1cb571a28b34e63d): ``` Wraps: (4) secondary error attachment | signal: killed | (1) signal: killed | Error types: (1) *exec.ExitError Wraps: (5) context canceled Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString cluster.go:2688,kv.go:589,test_runner.go:767: monitor failure: unexpected node event: 1: dead (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2676 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2684 | main.registerKVSplits.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:589 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) monitor failure Wraps: (3) unexpected node event: 1: dead Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString cluster.go:1667,context.go:140,cluster.go:1656,test_runner.go:848: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-2780761-1615874220-12-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 4: skipped 2: 5369 3: 5314 1: dead Error: UNCLASSIFIED_PROBLEM: 1: dead (1) UNCLASSIFIED_PROBLEM Wraps: (2) attached stack trace -- stack trace: | main.glob..func14 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1147 | main.wrap.func1 | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:271 | github.com/spf13/cobra.(*Command).execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:830 | github.com/spf13/cobra.(*Command).ExecuteC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:914 | github.com/spf13/cobra.(*Command).Execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:864 | main.main | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1852 | runtime.main | /usr/local/go/src/runtime/proc.go:204 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (3) 1: dead Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/kv/splits/nodes=3/quiesce=true](https://teamcity.cockroachdb.com/viewLog.html?buildId=2780761&tab=artifacts#/kv/splits/nodes=3/quiesce=true) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fsplits%2Fnodes%3D3%2Fquiesce%3Dtrue.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
- index: 2.0
- text_combine:
roachtest: kv/splits/nodes=3/quiesce=true failed - [(roachtest).kv/splits/nodes=3/quiesce=true failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2780761&tab=buildLog) on [release-21.1@4a47e0e305cbdabf963f896c1cb571a28b34e63d](https://github.com/cockroachdb/cockroach/commits/4a47e0e305cbdabf963f896c1cb571a28b34e63d): ``` Wraps: (4) secondary error attachment | signal: killed | (1) signal: killed | Error types: (1) *exec.ExitError Wraps: (5) context canceled Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString cluster.go:2688,kv.go:589,test_runner.go:767: monitor failure: unexpected node event: 1: dead (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2676 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2684 | main.registerKVSplits.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:589 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) monitor failure Wraps: (3) unexpected node event: 1: dead Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString cluster.go:1667,context.go:140,cluster.go:1656,test_runner.go:848: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-2780761-1615874220-12-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 4: skipped 2: 5369 3: 5314 1: dead Error: UNCLASSIFIED_PROBLEM: 1: dead (1) UNCLASSIFIED_PROBLEM Wraps: (2) attached stack trace -- stack trace: | main.glob..func14 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1147 | main.wrap.func1 | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:271 | github.com/spf13/cobra.(*Command).execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:830 | github.com/spf13/cobra.(*Command).ExecuteC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:914 | github.com/spf13/cobra.(*Command).Execute | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:864 | main.main | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1852 | runtime.main | /usr/local/go/src/runtime/proc.go:204 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (3) 1: dead Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/kv/splits/nodes=3/quiesce=true](https://teamcity.cockroachdb.com/viewLog.html?buildId=2780761&tab=artifacts#/kv/splits/nodes=3/quiesce=true) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fsplits%2Fnodes%3D3%2Fquiesce%3Dtrue.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
- label: test
- text:
roachtest kv splits nodes quiesce true failed on wraps secondary error attachment signal killed signal killed error types exec exiterror wraps context canceled error types withstack withstack errutil withprefix main withcommanddetails secondary withsecondaryerror errors errorstring cluster go kv go test runner go monitor failure unexpected node event dead attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerkvsplits home agent work go src github com cockroachdb cockroach pkg cmd roachtest kv go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s wraps monitor failure wraps unexpected node event dead error types withstack withstack errutil withprefix errors errorstring cluster go context go cluster go test runner go dead node detection home agent work go src github com cockroachdb cockroach bin roachprod monitor teamcity oneshot ignore empty nodes exit status skipped dead error unclassified problem dead unclassified problem wraps attached stack trace stack trace main glob home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go main wrap home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cobra command executec home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go main main home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go runtime main usr local go src runtime proc go runtime goexit usr 
local go src runtime asm s wraps dead error types errors unclassified withstack withstack errutil leaferror more artifacts powered by
- binary_label: 1
Record (Unnamed: 0 = 77,716)
- id: 14,910,639,871
- type: IssuesEvent
- created_at: 2021-01-22 09:52:48
- repo: firecracker-microvm/firecracker
- repo_url: https://api.github.com/repos/firecracker-microvm/firecracker
- action: closed
- title: [Code improvement] deduplicate literal HTTP responses in tests
- labels: Codebase: Refactoring Contribute: Good First Issue Contribute: Help Wanted
- body:
There are many tests with literal hardcoded HTTP responses that bloat the code. Some of them even have data embedded in them, making those tests hard to maintain. Example possible deduplication: in https://github.com/firecracker-microvm/firecracker/blob/e8200f3c3eaba014220e447e8d426c8cf8607eec/src/api_server/src/parsed_request.rs#L511 : ```diff - let expected_response = format!( - "HTTP/1.1 200 \r\n\ - Server: Firecracker API\r\n\ - Connection: keep-alive\r\n\ - Content-Type: application/json\r\n\ - Content-Length: 122\r\n\r\n{}", - VmConfig::default().to_string() - ); + let body = VmConfig::default().to_string(); + let expected_response = http_response_ok(&body); ); ``` where `http_response_ok()` could be reused in all tests. Example `http_response_ok()` definition: ```rust fn http_response_ok(body: &str) { format!( "HTTP/1.1 200 \r\n\ Server: Firecracker API\r\n\ Connection: keep-alive\r\n\ Content-Type: application/json\r\n\ Content-Length: {}\r\n\r\n{}", status_code, body.len(), body, ) } ```
- index: 1.0
- text_combine:
[Code improvement] deduplicate literal HTTP responses in tests - There are many tests with literal hardcoded HTTP responses that bloat the code. Some of them even have data embedded in them, making those tests hard to maintain. Example possible deduplication: in https://github.com/firecracker-microvm/firecracker/blob/e8200f3c3eaba014220e447e8d426c8cf8607eec/src/api_server/src/parsed_request.rs#L511 : ```diff - let expected_response = format!( - "HTTP/1.1 200 \r\n\ - Server: Firecracker API\r\n\ - Connection: keep-alive\r\n\ - Content-Type: application/json\r\n\ - Content-Length: 122\r\n\r\n{}", - VmConfig::default().to_string() - ); + let body = VmConfig::default().to_string(); + let expected_response = http_response_ok(&body); ); ``` where `http_response_ok()` could be reused in all tests. Example `http_response_ok()` definition: ```rust fn http_response_ok(body: &str) { format!( "HTTP/1.1 200 \r\n\ Server: Firecracker API\r\n\ Connection: keep-alive\r\n\ Content-Type: application/json\r\n\ Content-Length: {}\r\n\r\n{}", status_code, body.len(), body, ) } ```
- label: non_test
- text:
deduplicate literal http responses in tests there are many tests with literal hardcoded http responses that bloat the code some of them even have data embedded in them making those tests hard to maintain example possible deduplication in diff let expected response format http r n server firecracker api r n connection keep alive r n content type application json r n content length r n r n vmconfig default to string let body vmconfig default to string let expected response http response ok body where http response ok could be reused in all tests example http response ok definition rust fn http response ok body str format http r n server firecracker api r n connection keep alive r n content type application json r n content length r n r n status code body len body
- binary_label: 0
Record (Unnamed: 0 = 807,500)
- id: 30,005,856,254
- type: IssuesEvent
- created_at: 2023-06-26 12:24:42
- repo: webcompat/web-bugs
- repo_url: https://api.github.com/repos/webcompat/web-bugs
- action: closed
- title: app.meandu.com - Firefox is not a supported browser
- labels: browser-firefox priority-normal severity-critical type-unsupported action-needssitepatch engine-gecko needsinfo-raul diagnosis-priority-p1
- body:
<!-- @browser: Firefox 112.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/112.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/121875 --> **URL**: https://app.meandu.com **Browser / Version**: Firefox 112.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Safari **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: The page doesn't load and says the browser is not supported. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
- index: 2.0
- text_combine:
app.meandu.com - Firefox is not a supported browser - <!-- @browser: Firefox 112.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/112.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/121875 --> **URL**: https://app.meandu.com **Browser / Version**: Firefox 112.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Safari **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: The page doesn't load and says the browser is not supported. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
- label: non_test
- text:
app meandu com firefox is not a supported browser url browser version firefox operating system windows tested another browser yes safari problem type site is not usable description browser unsupported steps to reproduce the page doesn t load and says the browser is not supported browser configuration none from with ❤️
- binary_label: 0
Record (Unnamed: 0 = 265,942)
- id: 23,211,551,551
- type: IssuesEvent
- created_at: 2022-08-02 10:34:48
- repo: mozilla-mobile/fenix
- repo_url: https://api.github.com/repos/mozilla-mobile/fenix
- action: closed
- title: Intermittent UI test failure - < SettingsAddonsTest. noCrashWithAddonInstalledTest >
- labels: eng:intermittent-test eng:ui-test
- body:
### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/6331410488741407859/executions/bs.d3c4f02ab026c4a3/testcases/1/test-cases) ### Stacktrace: androidx.test.espresso.IdlingResourceTimeoutException: Wait for [SessionLoadedIdlingResource] to become idle timed out at dalvik.system.VMStack.getThreadStackTrace(Native Method) at java.lang.Thread.getStackTrace(Thread.java:1736) at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:12) at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:7) at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:8) at androidx.test.espresso.ViewInteraction.check(ViewInteraction.java:12) at org.mozilla.fenix.ui.robots.NavigationToolbarRobot$Transition.enterURLAndEnterToBrowser(NavigationToolbarRobot.kt:109) at org.mozilla.fenix.ui.SettingsAddonsTest.noCrashWithAddonInstalledTest(SettingsAddonsTest.kt:141) ### Build: 7/4 Main ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-20902)
- index: 2.0
- text_combine:
Intermittent UI test failure - < SettingsAddonsTest. noCrashWithAddonInstalledTest > - ### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/6331410488741407859/executions/bs.d3c4f02ab026c4a3/testcases/1/test-cases) ### Stacktrace: androidx.test.espresso.IdlingResourceTimeoutException: Wait for [SessionLoadedIdlingResource] to become idle timed out at dalvik.system.VMStack.getThreadStackTrace(Native Method) at java.lang.Thread.getStackTrace(Thread.java:1736) at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:12) at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:7) at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:8) at androidx.test.espresso.ViewInteraction.check(ViewInteraction.java:12) at org.mozilla.fenix.ui.robots.NavigationToolbarRobot$Transition.enterURLAndEnterToBrowser(NavigationToolbarRobot.kt:109) at org.mozilla.fenix.ui.SettingsAddonsTest.noCrashWithAddonInstalledTest(SettingsAddonsTest.kt:141) ### Build: 7/4 Main ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-20902)
- label: test
- text:
intermittent ui test failure firebase test run stacktrace androidx test espresso idlingresourcetimeoutexception wait for to become idle timed out at dalvik system vmstack getthreadstacktrace native method at java lang thread getstacktrace thread java at androidx test espresso base defaultfailurehandler getuserfriendlyerror defaultfailurehandler java at androidx test espresso base defaultfailurehandler handle defaultfailurehandler java at androidx test espresso viewinteraction waitforandhandleinteractionresults viewinteraction java at androidx test espresso viewinteraction check viewinteraction java at org mozilla fenix ui robots navigationtoolbarrobot transition enterurlandentertobrowser navigationtoolbarrobot kt at org mozilla fenix ui settingsaddonstest nocrashwithaddoninstalledtest settingsaddonstest kt build main ┆issue is synchronized with this
- binary_label: 1
Record (Unnamed: 0 = 95,624)
- id: 19,723,565,916
- type: IssuesEvent
- created_at: 2022-01-13 17:36:03
- repo: philres/catfishq
- repo_url: https://api.github.com/repos/philres/catfishq
- action: opened
- title: Use numpy to compute q-scores more efficiently
- labels: code-review
- body:
https://github.com/philres/catfishq/blob/4c42039d8b7c4d9009b0668399672f6d87aa3177/catfishq/cat_fastq.py#L36 1) Check if pysam returns numpy arrays. If it does use numpy to compute probabilities from phred scores more efficiently 2) Alternative: cython implementation of q-score computation
- index: 1.0
- text_combine:
Use numpy to compute q-scores more efficiently - https://github.com/philres/catfishq/blob/4c42039d8b7c4d9009b0668399672f6d87aa3177/catfishq/cat_fastq.py#L36 1) Check if pysam returns numpy arrays. If it does use numpy to compute probabilities from phred scores more efficiently 2) Alternative: cython implementation of q-score computation
- label: non_test
- text:
use numpy to compute q scores more efficiently check if pysam returns numpy arrays if it does use numpy to compute probabilities from phred scores more efficiently alternative cython implementation of q score computation
- binary_label: 0
Record (Unnamed: 0 = 385,778)
- id: 26,653,572,714
- type: IssuesEvent
- created_at: 2023-01-25 15:18:46
- repo: ll7/paf22
- repo_url: https://api.github.com/repos/ll7/paf22
- action: closed
- title: [Feature]: Implement filter for the GPS sensor
- labels: documentation enhancement Acting Perception additionally
- body:
### Description The current implementation of the GPS signal is too noisy for the lateral control algorithms. A filter of some sort needs to be implemented. With current methods (simple average) not working, a dedicated effort should be made to the processing of the sensor data. ### Definition of Done - sensor data should be useable for lateral control - move code from `DummyTrajectorySub.py` to a dedicated file - Add documentation for the entire issue
- index: 1.0
- text_combine:
[Feature]: Implement filter for the GPS sensor - ### Description The current implementation of the GPS signal is too noisy for the lateral control algorithms. A filter of some sort needs to be implemented. With current methods (simple average) not working, a dedicated effort should be made to the processing of the sensor data. ### Definition of Done - sensor data should be useable for lateral control - move code from `DummyTrajectorySub.py` to a dedicated file - Add documentation for the entire issue
- label: non_test
- text:
implement filter for the gps sensor description the current implementation of the gps signal is too noisy for the lateral control algorithms a filter of some sort needs to be implemented with current methods simple average not working a dedicated effort should be made to the processing of the sensor data definition of done sensor data should be useable for lateral control move code from dummytrajectorysub py to a dedicated file add documentation for the entire issue
- binary_label: 0
Record (Unnamed: 0 = 539,179)
- id: 15,784,761,236
- type: IssuesEvent
- created_at: 2021-04-01 15:30:40
- repo: azerothcore/azerothcore-wotlk
- repo_url: https://api.github.com/repos/azerothcore/azerothcore-wotlk
- action: closed
- title: Quest-Deviate-Hides-From-Nalpak-in-Wailing-Caverns-not-available-to-alliance
- labels: 1-19 ChromieCraft Generic Confirmed DB Fix included Priority - Low
- body:
Originally reported https://github.com/chromiecraft/chromiecraft/issues/209 ##### ISSUE: FACTION SIDE: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Guide to issues: - Text in between \<!-- and --\> (Ignore the backlash on this example) is not visible. It serves to guide you through the blueprint. Leave it as is. 1) Specify to which type of Faction the problem in question belongs. If the issue can happen to only one faction, remove the arrows before and after that faction's name below. If the issue CAN happen on both sides, remove both arrows. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- EDIT FROM THIS POINT DOWN ONLY --> ![Alliance](https://user-images.githubusercontent.com/1884642/108204869-3a88d100-711c-11eb-8179-e1b9b73ed450.png) ##### CONTENT PHASE: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 2) Specify the content phase where this bug belongs to, for example "1-19" or "20-29", etc... 
________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE FROM THIS POINT DOWN ONLY --> _1-19_ ##### SMALL DESCRIPTION: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 3a) Add a small description. 3b) Add links to point to the quest/NPCs/spells/items/... related to your problem. Delete the "<!--" symbols at the beginning and at the end according to the field you need, please ignore the others. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _The quest is not available to Alliance players, atleast not to night elves. 
I am not sure about horde._ **Quest:** [Deviate Hides](https://wowgaming.altervista.org/aowow/?quest=1486) **NPC_Start:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) **NPC_End:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> <!-- **NPC:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) --> <!-- **Quest:** [Deviate Hides](https://wowgaming.altervista.org/aowow/?quest=1486) --> **Zone:** [Wailing Caverns](https://wowgaming.altervista.org/aowow/?zone=718) ##### EXPECTED BLIZZLIKE BEHAVIOUR: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 4) Describe how it should be working without the bug. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _Quest available._ ##### CURRENT BEHAVIOUR: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Describe the bug in detail, then fill in the required fields for your problem (based on the needs of your problem) Delete the "<!--" symbols at the beginning and at the end according to the field you need, the fields you don't need to fill ignore them. 
________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _Quest not avalable._ ##### STEPS TO REPRODUCE THE PROBLEM: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Describe precisely how to reproduce the bug so we can fix it or confirm its existence: - Which commands to use? - Which NPC to teleport to? - Other steps ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> **Step 1** _Get to the barrens as an alliance character,_ **Step 2** _walk up to Nalpak in the Wailing Caverns (Near the entrance),_ **Step 3** _he will not have the quest available for you._ <!-------------------------------------------------------------------> <!-------------------------------------------------------------------> <!------------------ DO NOT MODIFY THE TEXT BELOW -------------------> <!-------------------------------------------------------------------> <!-------------------------------------------------------------------> ##### AC HASH/COMMIT: https://github.com/chromiecraft/azerothcore-wotlk/commit/a32310772487a7a547b1a239dfaf0967b19f7493 ##### OPERATING SYSTEM: Ubuntu 20.04 ##### MODULES: - [mod-cfbg](https://github.com/azerothcore/mod-cfbg) - [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset) - 
[mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings) - [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot) - [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine) - [lua-LevelUpReward](https://github.com/55Honey/Acore_LevelUpReward) ##### OTHER CUSTOMIZATIONS: None. ##### SERVER: ChromieCraft <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/97181938-quest-deviate-hides-from-nalpak-in-wailing-caverns-not-available-to-alliance?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github). </bountysource-plugin>
1.0
Quest-Deviate-Hides-From-Nalpak-in-Wailing-Caverns-not-available-to-alliance - Originally reported https://github.com/chromiecraft/chromiecraft/issues/209 ##### ISSUE: FACTION SIDE: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Guide to issues: - Text in between \<!-- and --\> (Ignore the backlash on this example) is not visible. It serves to guide you through the blueprint. Leave it as is. 1) Specify to which type of Faction the problem in question belongs. If the issue can happen to only one faction, remove the arrows before and after that faction's name below. If the issue CAN happen on both sides, remove both arrows. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- EDIT FROM THIS POINT DOWN ONLY --> ![Alliance](https://user-images.githubusercontent.com/1884642/108204869-3a88d100-711c-11eb-8179-e1b9b73ed450.png) ##### CONTENT PHASE: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 2) Specify the content phase where this bug belongs to, for example "1-19" or "20-29", etc... 
________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE FROM THIS POINT DOWN ONLY --> _1-19_ ##### SMALL DESCRIPTION: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 3a) Add a small description. 3b) Add links to point to the quest/NPCs/spells/items/... related to your problem. Delete the "<!--" symbols at the beginning and at the end according to the field you need, please ignore the others. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _The quest is not available to Alliance players, atleast not to night elves. 
I am not sure about horde._ **Quest:** [Deviate Hides](https://wowgaming.altervista.org/aowow/?quest=1486) **NPC_Start:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) **NPC_End:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> <!-- **NPC:** [Nalpak](https://wowgaming.altervista.org/aowow/?npc=5767) --> <!-- **Quest:** [Deviate Hides](https://wowgaming.altervista.org/aowow/?quest=1486) --> **Zone:** [Wailing Caverns](https://wowgaming.altervista.org/aowow/?zone=718) ##### EXPECTED BLIZZLIKE BEHAVIOUR: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ 4) Describe how it should be working without the bug. ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _Quest available._ ##### CURRENT BEHAVIOUR: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Describe the bug in detail, then fill in the required fields for your problem (based on the needs of your problem) Delete the "<!--" symbols at the beginning and at the end according to the field you need, the fields you don't need to fill ignore them. 
________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> _Quest not avalable._ ##### STEPS TO REPRODUCE THE PROBLEM: <!-- ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ Describe precisely how to reproduce the bug so we can fix it or confirm its existence: - Which commands to use? - Which NPC to teleport to? - Other steps ________________________________________________________________________________________________________________________________________ ________________________________________________________________________________________________________________________________________ --> <!-- WRITE/EDIT FROM THIS POINT DOWN ONLY --> **Step 1** _Get to the barrens as an alliance character,_ **Step 2** _walk up to Nalpak in the Wailing Caverns (Near the entrance),_ **Step 3** _he will not have the quest available for you._ <!-------------------------------------------------------------------> <!-------------------------------------------------------------------> <!------------------ DO NOT MODIFY THE TEXT BELOW -------------------> <!-------------------------------------------------------------------> <!-------------------------------------------------------------------> ##### AC HASH/COMMIT: https://github.com/chromiecraft/azerothcore-wotlk/commit/a32310772487a7a547b1a239dfaf0967b19f7493 ##### OPERATING SYSTEM: Ubuntu 20.04 ##### MODULES: - [mod-cfbg](https://github.com/azerothcore/mod-cfbg) - [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset) - 
[mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings) - [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot) - [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine) - [lua-LevelUpReward](https://github.com/55Honey/Acore_LevelUpReward) ##### OTHER CUSTOMIZATIONS: None. ##### SERVER: ChromieCraft <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/97181938-quest-deviate-hides-from-nalpak-in-wailing-caverns-not-available-to-alliance?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github). </bountysource-plugin>
non_test
quest deviate hides from nalpak in wailing caverns not available to alliance originally reported issue faction side guide to issues text in between ignore the backlash on this example is not visible it serves to guide you through the blueprint leave it as is specify to which type of faction the problem in question belongs if the issue can happen to only one faction remove the arrows before and after that faction s name below if the issue can happen on both sides remove both arrows content phase specify the content phase where this bug belongs to for example or etc small description add a small description add links to point to the quest npcs spells items related to your problem delete the symbols at the beginning and at the end according to the field you need please ignore the others the quest is not available to alliance players atleast not to night elves i am not sure about horde quest npc start npc end zone expected blizzlike behaviour describe how it should be working without the bug quest available current behaviour describe the bug in detail then fill in the required fields for your problem based on the needs of your problem delete the symbols at the beginning and at the end according to the field you need the fields you don t need to fill ignore them quest not avalable steps to reproduce the problem describe precisely how to reproduce the bug so we can fix it or confirm its existence which commands to use which npc to teleport to other steps step get to the barrens as an alliance character step walk up to nalpak in the wailing caverns near the entrance step he will not have the quest available for you ac hash commit operating system ubuntu modules other customizations none server chromiecraft want to back this issue we accept bounties via
0
100,732
12,556,582,600
IssuesEvent
2020-06-07 10:13:10
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
ENH: Dropping outliers
API Design Enhancement Numeric
Create a new function to remove outliers. http://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-pandas-dataframe I find myself using the code from SO quite often to remove outliers in a particular column when preprocessing data and it seems this is a common issue. It would be nice to have a function that operates on a Series to do this automatically. #### Code Sample, a copy-pastable example if possible ```python df = pd.DataFrame(np.random.randn(100, 3)) # from SO answer by tanemaki from scipy import stats df[(np.abs(stats.zscore(df)) < 3).all(axis=1)] # instead, would prefer df.drop_outliers(3) ``` Alternatively, instead of a new function, we could modify .clip(), though I think a new function makes more sense. The implementation could be similar to the SO implementation. If there is agreement that this would be useful and the implementation makes sense, happy to do the PR.
1.0
ENH: Dropping outliers - Create a new function to remove outliers. http://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-pandas-dataframe I find myself using the code from SO quite often to remove outliers in a particular column when preprocessing data and it seems this is a common issue. It would be nice to have a function that operates on a Series to do this automatically. #### Code Sample, a copy-pastable example if possible ```python df = pd.DataFrame(np.random.randn(100, 3)) # from SO answer by tanemaki from scipy import stats df[(np.abs(stats.zscore(df)) < 3).all(axis=1)] # instead, would prefer df.drop_outliers(3) ``` Alternatively, instead of a new function, we could modify .clip(), though I think a new function makes more sense. The implementation could be similar to the SO implementation. If there is agreement that this would be useful and the implementation makes sense, happy to do the PR.
non_test
enh dropping outliers create a new function to remove outliers i find myself using the code from so quite often to remove outliers in a particular column when preprocessing data and it seems this is a common issue it would be nice to have a function that operates on a series to do this automatically code sample a copy pastable example if possible python df pd dataframe np random randn from so answer by tanemaki from scipy import stats df instead would prefer df drop outliers alternatively instead of a new function we could modify clip though i think a new function makes more sense the implementation could be similar to the so implementation if there is agreement that this would be useful and the implementation makes sense happy to do the pr
0
246,974
20,948,146,451
IssuesEvent
2022-03-26 06:54:49
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
Unmovable Infinite Storage Bug
bug need more info Not Tested
The Infinite Storage Bug when picked up will drop around 4-9 maybe more storage before disappearing leaving stacks of chests on the floor creating basically an infinite amount of storage chests. The Unmovable Storage Bug when collecting or picking an empty storage it would do the same with the Infinite Storage Bug but the Storage itself will not disappear and no matter how many times you click to pick up it would still drop infinite amounts of storage but as the storage box itself wont disappear you wont be able to collect this so called infinite amount of Storage only if it were to disappear but since the storage itself wont disappear you are able to access the storage itself. I hope the Devs take notice of this. Thank You for all your Hard Work! -MrVilo
1.0
Unmovable Infinite Storage Bug - The Infinite Storage Bug when picked up will drop around 4-9 maybe more storage before disappearing leaving stacks of chests on the floor creating basically an infinite amount of storage chests. The Unmovable Storage Bug when collecting or picking an empty storage it would do the same with the Infinite Storage Bug but the Storage itself will not disappear and no matter how many times you click to pick up it would still drop infinite amounts of storage but as the storage box itself wont disappear you wont be able to collect this so called infinite amount of Storage only if it were to disappear but since the storage itself wont disappear you are able to access the storage itself. I hope the Devs take notice of this. Thank You for all your Hard Work! -MrVilo
test
unmovable infinite storage bug the infinite storage bug when picked up will drop around maybe more storage before disappearing leaving stacks of chests on the floor creating basically an infinite amount of storage chests the unmovable storage bug when collecting or picking an empty storage it would do the same with the infinite storage bug but the storage itself will not disappear and no matter how many times you click to pick up it would still drop infinite amounts of storage but as the storage box itself wont disappear you wont be able to collect this so called infinite amount of storage only if it were to disappear but since the storage itself wont disappear you are able to access the storage itself i hope the devs take notice of this thank you for all your hard work mrvilo
1
1,305
3,550,790,801
IssuesEvent
2016-01-20 23:29:40
brata-hsdc/brata.masterserver
https://api.github.com/repos/brata-hsdc/brata.masterserver
closed
Reg Code
ms-piservice priority:1-drop-everything state:2-in-work
We need the master server to accept the registration url with the passcode as the last element of the path and use that to look up the team as well as respond with the reg_code they should use. There is an open issue @ellerychan is working for how the passcode and reg codes are generated so this issue will only put in the initial framework for this part of the solution and be completed with that other issue.
1.0
Reg Code - We need the master server to accept the registration url with the passcode as the last element of the path and use that to look up the team as well as respond with the reg_code they should use. There is an open issue @ellerychan is working for how the passcode and reg codes are generated so this issue will only put in the initial framework for this part of the solution and be completed with that other issue.
non_test
reg code we need the master server to accept the registration url with the passcode as the last element of the path and use that to look up the team as well as respond with the reg code they should use there is an open issue ellerychan is working for how the passcode and reg codes are generated so this issue will only put in the initial framework for this part of the solution and be completed with that other issue
0
21,634
3,535,336,093
IssuesEvent
2016-01-16 12:31:58
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
Reading from backup does not update access data
Team: Core Type: Defect
When **MapConfig** option *"readBackupData"* is set to true then "get" operations do not trigger the update of access timestamps and hit count. This way the entries might get evicted even though they are frequently used. Cause of this issue is in *com.hazelcast.map.impl.DefaultRecordStore#readBackupData* method.
1.0
Reading from backup does not update access data - When **MapConfig** option *"readBackupData"* is set to true then "get" operations do not trigger the update of access timestamps and hit count. This way the entries might get evicted even though they are frequently used. Cause of this issue is in *com.hazelcast.map.impl.DefaultRecordStore#readBackupData* method.
non_test
reading from backup does not update access data when mapconfig option readbackupdata is set to true then get operations do not trigger the update of access timestamps and hit count this way the entries might get evicted even though they are frequently used cause of this issue is in com hazelcast map impl defaultrecordstore readbackupdata method
0
696,338
23,897,921,778
IssuesEvent
2022-09-08 16:07:19
ooni/probe
https://api.github.com/repos/ooni/probe
opened
Add copy in Test Options screen to mention that test settings apply to both manual and automated runs
ooni/probe-mobile priority/high ooni/probe-desktop copy
As a follow-up to https://github.com/ooni/probe/issues/2265 and https://github.com/ooni/probe/issues/2266, we need to communicate to users that the **test settings** that they configure (e.g. disabling WhatsApp and Psiphon tests) via the Test Options settings of the OONI Probe app **apply to both manual and automated runs**. To this end, we could add a string in the Test Options screen (under the list of test categories), which says: ``` What you configure through the above test settings (e.g. disabling the WhatsApp test) will apply to tests run manually, as well as to tests run automatically (when automated testing is enabled). ``` @majakomel adding the above to OONI Probe Desktop first depends on https://github.com/ooni/probe/issues/2267 and https://github.com/ooni/probe/issues/2265.
1.0
Add copy in Test Options screen to mention that test settings apply to both manual and automated runs - As a follow-up to https://github.com/ooni/probe/issues/2265 and https://github.com/ooni/probe/issues/2266, we need to communicate to users that the **test settings** that they configure (e.g. disabling WhatsApp and Psiphon tests) via the Test Options settings of the OONI Probe app **apply to both manual and automated runs**. To this end, we could add a string in the Test Options screen (under the list of test categories), which says: ``` What you configure through the above test settings (e.g. disabling the WhatsApp test) will apply to tests run manually, as well as to tests run automatically (when automated testing is enabled). ``` @majakomel adding the above to OONI Probe Desktop first depends on https://github.com/ooni/probe/issues/2267 and https://github.com/ooni/probe/issues/2265.
non_test
add copy in test options screen to mention that test settings apply to both manual and automated runs as a follow up to and we need to communicate to users that the test settings that they configure e g disabling whatsapp and psiphon tests via the test options settings of the ooni probe app apply to both manual and automated runs to this end we could add a string in the test options screen under the list of test categories which says what you configure through the above test settings e g disabling the whatsapp test will apply to tests run manually as well as to tests run automatically when automated testing is enabled majakomel adding the above to ooni probe desktop first depends on and
0
386,126
11,432,451,687
IssuesEvent
2020-02-04 14:06:30
mozilla/addons-server
https://api.github.com/repos/mozilla/addons-server
closed
Inconclusive or false positive triage should re-enabled auto-approval
component: scanners priority: p3 triaged
### Describe the problem and steps to reproduce it: When a version that is held for manual review gets triaged as _inconclusive_ or _false positive_, we should re-enable auto-approval it. ### What happened? Clicking on _inconclusive_ or _false positive_ does not auto-approve the held version. ### What did you expect to happen? Clicking on _inconclusive_ or _false positive_ should auto-approve the held version. ### Anything else we should know? (Please include a link to the page, screenshots and any relevant files.)
1.0
Inconclusive or false positive triage should re-enabled auto-approval - ### Describe the problem and steps to reproduce it: When a version that is held for manual review gets triaged as _inconclusive_ or _false positive_, we should re-enable auto-approval it. ### What happened? Clicking on _inconclusive_ or _false positive_ does not auto-approve the held version. ### What did you expect to happen? Clicking on _inconclusive_ or _false positive_ should auto-approve the held version. ### Anything else we should know? (Please include a link to the page, screenshots and any relevant files.)
non_test
inconclusive or false positive triage should re enabled auto approval describe the problem and steps to reproduce it when a version that is held for manual review gets triaged as inconclusive or false positive we should re enable auto approval it what happened clicking on inconclusive or false positive does not auto approve the held version what did you expect to happen clicking on inconclusive or false positive should auto approve the held version anything else we should know please include a link to the page screenshots and any relevant files
0
101,597
8,791,282,789
IssuesEvent
2018-12-21 12:05:26
SME-Issues/issues
https://api.github.com/repos/SME-Issues/issues
closed
Compound Query Tests Invoice None - 21/12/18 11:01 - 5004
NLP Api PETEDEV pulse_tests
**Compound Query Tests Invoice None** - Total: 24 - Passed: 19 - **Pass: 11 (73%)** - Not Understood: 9 - Error (not understood): 0 - Failed but Understood: 4 (27%)
1.0
Compound Query Tests Invoice None - 21/12/18 11:01 - 5004 - **Compound Query Tests Invoice None** - Total: 24 - Passed: 19 - **Pass: 11 (73%)** - Not Understood: 9 - Error (not understood): 0 - Failed but Understood: 4 (27%)
test
compound query tests invoice none compound query tests invoice none total passed pass not understood error not understood failed but understood
1
38,315
5,173,562,276
IssuesEvent
2017-01-18 16:22:47
ngageoint/hootenanny-ui
https://api.github.com/repos/ngageoint/hootenanny-ui
closed
Issue with Cookie Cutter Conflation
Category: Test Identified During Regression Test Status: Ready for Test Type: Bug
Attempted the following method: **Cookie Cutter & Horizontal** For this example we’ll need to create two custom translations, one for the DC Street Centerline Data* described in and a second simple translation to ensure that the OSM highway data for DC maintains the correct osm tags. Ingest DC Street datasets using the recently created custom translation files. Note the different Translation Schema files used to import each dataset. - district_of_columbia_highway.zip - Street_Centerline_Light.shp Return to Map and select the Street Centerlines Light as the Reference Dataset, dc highway osm as the Secondary dataset. Click ‘Conflate’ Change the value for Type to Cookie Cutter & Horizontal. Hit Conflate. Note the conflation time will vary depending on the specs of the machine. This example took about 10-15 min to run locally. 50+ reviews should appear. ----> Instead of launching into review mode, it kicks back an error (will be happy to include the log upon request due to length) Hypothesized it could be any issue with the data, but I attempted the above cookie cut conflation on Hoot NOME with zero issues. Would somebody mind trying to replicate the results on hoot release?
3.0
Issue with Cookie Cutter Conflation - Attempted the following method: **Cookie Cutter & Horizontal** For this example we’ll need to create two custom translations, one for the DC Street Centerline Data* described in and a second simple translation to ensure that the OSM highway data for DC maintains the correct osm tags. Ingest DC Street datasets using the recently created custom translation files. Note the different Translation Schema files used to import each dataset. - district_of_columbia_highway.zip - Street_Centerline_Light.shp Return to Map and select the Street Centerlines Light as the Reference Dataset, dc highway osm as the Secondary dataset. Click ‘Conflate’ Change the value for Type to Cookie Cutter & Horizontal. Hit Conflate. Note the conflation time will vary depending on the specs of the machine. This example took about 10-15 min to run locally. 50+ reviews should appear. ----> Instead of launching into review mode, it kicks back an error (will be happy to include the log upon request due to length) Hypothesized it could be any issue with the data, but I attempted the above cookie cut conflation on Hoot NOME with zero issues. Would somebody mind trying to replicate the results on hoot release?
test
issue with cookie cutter conflation attempted the following method cookie cutter horizontal for this example we’ll need to create two custom translations one for the dc street centerline data described in and a second simple translation to ensure that the osm highway data for dc maintains the correct osm tags ingest dc street datasets using the recently created custom translation files note the different translation schema files used to import each dataset district of columbia highway zip street centerline light shp return to map and select the street centerlines light as the reference dataset dc highway osm as the secondary dataset click ‘conflate’ change the value for type to cookie cutter horizontal hit conflate note the conflation time will vary depending on the specs of the machine this example took about min to run locally reviews should appear instead of launching into review mode it kicks back an error will be happy to include the log upon request due to length hypothesized it could be any issue with the data but i attempted the above cookie cut conflation on hoot nome with zero issues would somebody mind trying to replicate the results on hoot release
1
346,698
10,418,690,931
IssuesEvent
2019-09-15 10:51:18
SunwellTracker/issues
https://api.github.com/repos/SunwellTracker/issues
closed
Autoattacking trough pillar los on all arenas
Fixed & Implemented High Priority Map
Decription: ^ Title How it works: ^ Title How it should work: U shouldn't be able to attack trough los rofl. Source (you should point out proofs of your report, please give us some source):
1.0
Autoattacking trough pillar los on all arenas - Decription: ^ Title How it works: ^ Title How it should work: U shouldn't be able to attack trough los rofl. Source (you should point out proofs of your report, please give us some source):
non_test
autoattacking trough pillar los on all arenas decription title how it works title how it should work u shouldn t be able to attack trough los rofl source you should point out proofs of your report please give us some source
0
159,520
12,478,329,528
IssuesEvent
2020-05-29 16:20:44
dbrownukk/EFD_v2
https://api.github.com/repos/dbrownukk/EFD_v2
closed
Unable to upload interview spreadsheet
For Testing bug
Instance: EFD_HM App: OIHM Module: HH HH: Old Mother Riley 1. in Module Study, click 'AnOIHM Study' to invoke detail 1. Click 'Template Spreadsheet' to generate interview spreadsheet template view 1. Fill in the template (see attached) 1. In HH module, click 'New' to create a new HH 1. Upload the filled in Template spreadsheet. 1. Click Save; uploaded spreadsheet saves successfully 1. Click Parse Spreadsheet `ERROR` Parse Failed - Wrong template for Household Members java.lang.NullPointerException `ERROR` Cannot Validate Household, No Assets to Validate
1.0
Unable to upload interview spreadsheet - Instance: EFD_HM App: OIHM Module: HH HH: Old Mother Riley 1. in Module Study, click 'AnOIHM Study' to invoke detail 1. Click 'Template Spreadsheet' to generate interview spreadsheet template view 1. Fill in the template (see attached) 1. In HH module, click 'New' to create a new HH 1. Upload the filled in Template spreadsheet. 1. Click Save; uploaded spreadsheet saves successfully 1. Click Parse Spreadsheet `ERROR` Parse Failed - Wrong template for Household Members java.lang.NullPointerException `ERROR` Cannot Validate Household, No Assets to Validate
test
unable to upload interview spreadsheet instance efd hm app oihm module hh hh old mother riley in module study click anoihm study to invoke detail click template spreadsheet to generate interview spreadsheet template view fill in the template see attached in hh module click new to create a new hh upload the filled in template spreadsheet click save uploaded spreadsheet saves successfully click parse spreadsheet error parse failed wrong template for household members java lang nullpointerexception error cannot validate household no assets to validate
1
212,193
7,229,198,706
IssuesEvent
2018-02-11 17:35:23
Asgaros/asgaros-forum
https://api.github.com/repos/Asgaros/asgaros-forum
closed
Moving Forums
Feature Priority: High
It would be great to have an admin tool to move topics from one Category to another.
1.0
Moving Forums - It would be great to have an admin tool to move topics from one Category to another.
non_test
moving forums it would be great to have an admin tool to move topics from one category to another
0
34,613
4,934,217,013
IssuesEvent
2016-11-28 18:27:05
Metaswitch/clearwater-etcd
https://api.github.com/repos/Metaswitch/clearwater-etcd
opened
Running upload_shared_config too early causes problems
bug cat:system-test low-priority
#### Symptoms I've spun up a deployment and hit a couple of problems running upload_shared_config, I think because I ran it too early. 1) I fell the wrong side of [this check](https://github.com/Metaswitch/clearwater-etcd/blob/bdfc0ee2a90bb1fd3d0f0d8dc19fbc63bcc2341c/clearwater-config-manager.root/usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config#L60). Obviously this is WAD but...how long am I supposed to wait? 2) I hit a crash... ``` [10.225.166.15] sudo: /usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config [10.225.166.15] out: Traceback (most recent call last): [10.225.166.15] out: File "/usr/share/clearwater/clearwater-config-manager/scripts/log_shared_config.py", line 91, in <module> [10.225.166.15] out: main() [10.225.166.15] out: File "/usr/share/clearwater/clearwater-config-manager/scripts/log_shared_config.py", line 55, in main [10.225.166.15] out: old_config_lines = json.loads(jsonstr)["node"]["value"].splitlines() [10.225.166.15] out: KeyError: 'node' ``` I'm guessing this is because etcd had just started but we hadn't read `shared_config` for the first time yet? #### Impact <!-- What is this preventing you from doing? Does this stop Clearwater working, or stop some calls being processed? --> There's actually no tangible functional impact here - if we try to upload shared config 'too early' so to speak then I assume it's as if the version of shared_config that we want to use is the only one any of our processes have seen. However, it's not great from a usability standpoint (particularly if someone is trying to do orchestration!) and hitting an uncaught exception is never great. #### Release and environment CC-sprout-10.0.0-20161116-~255.00.0.ova #### Steps to reproduce Spin up a system and `upload_shared_config` as soon as you can... I've hit the first issue twice and the second issue once from spinning up a deployment ~20 times.
index: 1.0
text_combine:
Running upload_shared_config too early causes problems - #### Symptoms I've spun up a deployment and hit a couple of problems running upload_shared_config, I think because I ran it too early. 1) I fell the wrong side of [this check](https://github.com/Metaswitch/clearwater-etcd/blob/bdfc0ee2a90bb1fd3d0f0d8dc19fbc63bcc2341c/clearwater-config-manager.root/usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config#L60). Obviously this is WAD but...how long am I supposed to wait? 2) I hit a crash... ``` [10.225.166.15] sudo: /usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config [10.225.166.15] out: Traceback (most recent call last): [10.225.166.15] out: File "/usr/share/clearwater/clearwater-config-manager/scripts/log_shared_config.py", line 91, in <module> [10.225.166.15] out: main() [10.225.166.15] out: File "/usr/share/clearwater/clearwater-config-manager/scripts/log_shared_config.py", line 55, in main [10.225.166.15] out: old_config_lines = json.loads(jsonstr)["node"]["value"].splitlines() [10.225.166.15] out: KeyError: 'node' ``` I'm guessing this is because etcd had just started but we hadn't read `shared_config` for the first time yet? #### Impact <!-- What is this preventing you from doing? Does this stop Clearwater working, or stop some calls being processed? --> There's actually no tangible functional impact here - if we try to upload shared config 'too early' so to speak then I assume it's as if the version of shared_config that we want to use is the only one any of our processes have seen. However, it's not great from a usability standpoint (particularly if someone is trying to do orchestration!) and hitting an uncaught exception is never great. #### Release and environment CC-sprout-10.0.0-20161116-~255.00.0.ova #### Steps to reproduce Spin up a system and `upload_shared_config` as soon as you can... I've hit the first issue twice and the second issue once from spinning up a deployment ~20 times.
label: test
text:
running upload shared config too early causes problems symptoms i ve spun up a deployment and hit a couple of problems running upload shared config i think because i ran it too early i fell the wrong side of obviously this is wad but how long am i supposed to wait i hit a crash sudo usr share clearwater clearwater config manager scripts upload shared config out traceback most recent call last out file usr share clearwater clearwater config manager scripts log shared config py line in out main out file usr share clearwater clearwater config manager scripts log shared config py line in main out old config lines json loads jsonstr splitlines out keyerror node i m guessing this is because etcd had just started but we hadn t read shared config for the first time yet impact there s actually no tangible functional impact here if we try to upload shared config too early so to speak then i assume it s as if the version of shared config that we want to use is the only one any of our processes have seen however it s not great from a usability standpoint particularly if someone is trying to do orchestration and hitting an uncaught exception is never great release and environment cc sprout ova steps to reproduce spin up a system and upload shared config as soon as you can i ve hit the first issue twice and the second issue once from spinning up a deployment times
binary_label: 1

Unnamed: 0: 134,018
id: 10,878,544,487
type: IssuesEvent
created_at: 2019-11-16 18:23:45
repo: CARTAvis/carta-backend-ICD-test
repo_url: https://api.github.com/repos/CARTAvis/carta-backend-ICD-test
action: closed
title: [Backend ICD test] image zoom and pan
labels: test implementation
body:
Test cases doc: https://docs.google.com/document/d/1h098N2xCB4Rburjm8G9XTcN3MqfMjVhwt_Qrpy-69Os/edit#heading=h.umgtuzlazls4 Test cases in the doc: - (backlog) IMAGE_ZOOM_PAN Relevant messages: - SET_IMAGE_VIEW - RASTER_IMAGE_DATA
index: 1.0
text_combine:
[Backend ICD test] image zoom and pan - Test cases doc: https://docs.google.com/document/d/1h098N2xCB4Rburjm8G9XTcN3MqfMjVhwt_Qrpy-69Os/edit#heading=h.umgtuzlazls4 Test cases in the doc: - (backlog) IMAGE_ZOOM_PAN Relevant messages: - SET_IMAGE_VIEW - RASTER_IMAGE_DATA
label: test
text:
image zoom and pan test cases doc test cases in the doc backlog image zoom pan relevant messages set image view raster image data
binary_label: 1

Unnamed: 0: 203,486
id: 15,371,490,222
type: IssuesEvent
created_at: 2021-03-02 10:04:44
repo: G-Node/WinGIN
repo_url: https://api.github.com/repos/G-Node/WinGIN
action: opened
title: WinGIN is not closing corectly
labels: needs testing
body:
When restarting or shutting down PC (with fast SSD), the WinGIN will show error and does not close correctly.
index: 1.0
text_combine:
WinGIN is not closing corectly - When restarting or shutting down PC (with fast SSD), the WinGIN will show error and does not close correctly.
label: test
text:
wingin is not closing corectly when restarting or shutting down pc with fast ssd the wingin will show error and does not close correctly
binary_label: 1

Unnamed: 0: 167,839
id: 13,044,597,824
type: IssuesEvent
created_at: 2020-07-29 05:12:17
repo: blockstack/stacks-blockchain
repo_url: https://api.github.com/repos/blockstack/stacks-blockchain
action: closed
title: miner panicked at 'attempt to subtract with overflow'
labels: bug help wanted testnet
body:
## Describe the bug Was running a miner for over 24h (nearly 48 hours I think), and the miner panicked with: ``` DEBUG [1594297336.457] [src/chainstate/stacks/index/storage.rs:1080] [ThreadId(24214)] Flush: identifier of self is 8637 DEBUG [1594297336.457] [src/chainstate/burn/db/burndb.rs:680] [ThreadId(24214)] Insert block snapshot state 76bbef12ef89fe118b2a077a2233f445f69b3a1273a2655d7c76c2dec4fb5b19 for block 8636 (3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229,6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681) 8330 DEBUG [1594297336.457] [src/chainstate/burn/db/burndb.rs:965] [ThreadId(24214)] ACCEPTED(8636) leader key register 9c5bb2c072d5ad55ffb246cb9479c7b7c1f102481073e8a375254d5a8c61352c at 8636,1 DEBUG [1594297336.458] [src/chainstate/burn/db/burndb.rs:969] [ThreadId(24214)] ACCEPTED(8636) leader block commit 5a4ed1ba90ac3d4a35dc7441cd4955831c59e07e7f38ac4e7666628fa5392800 at 8636,2 DEBUG [1594297336.458] [src/chainstate/burn/db/burndb.rs:965] [ThreadId(24214)] ACCEPTED(8636) leader key register 41168f6cf646875714623e1b3580877289f74b24ffe039f89d88c9c325cd2151 at 8636,3 DEBUG [1594297336.459] [src/chainstate/burn/db/burndb.rs:969] [ThreadId(24214)] ACCEPTED(8636) leader block commit 630218037b8efb0bbb322e9c064451f1c4d24fd415d94e5e3c7eb85cae281a11 at 8636,4 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:665] [ThreadId(24214)] OPS-HASH(8636): 1be9d3dd495fbc38ae59c486bdb07386f1132e882ae8ec33bb90b984bd2a9265 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:666] [ThreadId(24214)] INDEX-ROOT(8636): 76bbef12ef89fe118b2a077a2233f445f69b3a1273a2655d7c76c2dec4fb5b19 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:667] [ThreadId(24214)] SORTITION-HASH(8636): d3135a1038616d2ae5ec0dace3c818f01a53e01a9694294252313209b223d792 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:668] [ThreadId(24214)] CONSENSUS(8636): e11771a85fef5eca2ff61964bd6bb647df2e7234 thread '<unnamed>' panicked at 'attempt to subtract with overflow', 
src/burnchains/burnchain.rs:919:85 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/burnchains/burnchain.rs:946:60 ``` And way earlier in the log (I was running with lots of additional logging), but the first prior `Try recv next parsed block` log: ``` DEBUG [1594297338.771] [src/burnchains/burnchain.rs:895] [ThreadId(24213)] Parsed block 8636 in 19ms DEBUG [1594297338.771] [src/burnchains/burnchain.rs:908] [ThreadId(24214)] Try recv next parsed block DEBUG [1594297338.771] [src/burnchains/burnchain.rs:760] [ThreadId(24214)] Process block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.771] [src/burnchains/burnchain.rs:706] [ThreadId(24214)] Get header for block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.771] [src/burnchains/burnchain.rs:499] [ThreadId(24214)] Extract Blockstack transactions from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.772] [src/burnchains/burnchain.rs:680] [ThreadId(24214)] BEGIN(8636) block (3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229,6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681) DEBUG [1594297338.772] [src/burnchains/burnchain.rs:681] [ThreadId(24214)] Append 4 operation(s) from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.772] [src/burnchains/burnchain.rs:608] [ThreadId(24214)] Check Blockstack transactions from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 ``` which has a higher timestamp, so maybe the cause for the overflow? 
and here's the part of the log where the "inversion" happened: ``` DEBUG [1594297338.938] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is b6625fae007e2cb50026fa734d2f4ac129582599cdef73ec6ddaefb280b6b78b (update_skiplist=false) DEBUG [1594297338.938] [src/chainstate/stacks/index/marf.rs:467] [ThreadId(24214)] MARF Insert in 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229: 'c3d18ea0eea5a15cfe8a1ca63912d728ce68d3b9389688138c38895ec2199f40' = '6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba76810000000000000000' (...[]) DEBUG [1594297336.352] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is b91e9d5d206cf411d4ea03d9840f76b7b943bd4a7b1a3b82845db22f050dfc4f (update_skiplist=false) DEBUG [1594297336.353] [src/chainstate/stacks/index/marf.rs:467] [ThreadId(24214)] MARF Insert in 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229: 'c612882cf676e1d004ad9c4cc2ee70757b92fbcdc5ef4e732570058b4454ddb5' = 'bb210000000000000000000000000000000000000000000000000000000000000000000000000000' (...[]) DEBUG [1594297336.365] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is acd85ff39cc9421273de68319227d3a017aa1c30046ff62d0af4083b6317806b (update_skiplist=true) DEBUG [1594297336.365] [src/chainstate/stacks/index/marf.rs:786] [ThreadId(24214)] Opened 6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681 in ./.stack/burnchain/db/bitcoin/regtest/burn.db/marf ``` Unfortunately, while trying to save the terminal, my mac rebooted (probably was over 100Gb), so I no longer have the whole log, but I can provide more logs right before the panic if needed. The line numbers in `burnchain.rs` are consistent with current master. ## Steps To Reproduce not sure how you would reproduce for sure... run for **a while**? ## Expected behavior Don't panic would be preferable ## Environment - OS: Mac OS 10.14.6 - Rust version: rustc 1.43.1 (8d69840ab 2020-05-04)
index: 1.0
text_combine:
miner panicked at 'attempt to subtract with overflow' - ## Describe the bug Was running a miner for over 24h (nearly 48 hours I think), and the miner panicked with: ``` DEBUG [1594297336.457] [src/chainstate/stacks/index/storage.rs:1080] [ThreadId(24214)] Flush: identifier of self is 8637 DEBUG [1594297336.457] [src/chainstate/burn/db/burndb.rs:680] [ThreadId(24214)] Insert block snapshot state 76bbef12ef89fe118b2a077a2233f445f69b3a1273a2655d7c76c2dec4fb5b19 for block 8636 (3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229,6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681) 8330 DEBUG [1594297336.457] [src/chainstate/burn/db/burndb.rs:965] [ThreadId(24214)] ACCEPTED(8636) leader key register 9c5bb2c072d5ad55ffb246cb9479c7b7c1f102481073e8a375254d5a8c61352c at 8636,1 DEBUG [1594297336.458] [src/chainstate/burn/db/burndb.rs:969] [ThreadId(24214)] ACCEPTED(8636) leader block commit 5a4ed1ba90ac3d4a35dc7441cd4955831c59e07e7f38ac4e7666628fa5392800 at 8636,2 DEBUG [1594297336.458] [src/chainstate/burn/db/burndb.rs:965] [ThreadId(24214)] ACCEPTED(8636) leader key register 41168f6cf646875714623e1b3580877289f74b24ffe039f89d88c9c325cd2151 at 8636,3 DEBUG [1594297336.459] [src/chainstate/burn/db/burndb.rs:969] [ThreadId(24214)] ACCEPTED(8636) leader block commit 630218037b8efb0bbb322e9c064451f1c4d24fd415d94e5e3c7eb85cae281a11 at 8636,4 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:665] [ThreadId(24214)] OPS-HASH(8636): 1be9d3dd495fbc38ae59c486bdb07386f1132e882ae8ec33bb90b984bd2a9265 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:666] [ThreadId(24214)] INDEX-ROOT(8636): 76bbef12ef89fe118b2a077a2233f445f69b3a1273a2655d7c76c2dec4fb5b19 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:667] [ThreadId(24214)] SORTITION-HASH(8636): d3135a1038616d2ae5ec0dace3c818f01a53e01a9694294252313209b223d792 DEBUG [1594297336.459] [src/burnchains/burnchain.rs:668] [ThreadId(24214)] CONSENSUS(8636): e11771a85fef5eca2ff61964bd6bb647df2e7234 thread 
'<unnamed>' panicked at 'attempt to subtract with overflow', src/burnchains/burnchain.rs:919:85 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/burnchains/burnchain.rs:946:60 ``` And way earlier in the log (I was running with lots of additional logging), but the first prior `Try recv next parsed block` log: ``` DEBUG [1594297338.771] [src/burnchains/burnchain.rs:895] [ThreadId(24213)] Parsed block 8636 in 19ms DEBUG [1594297338.771] [src/burnchains/burnchain.rs:908] [ThreadId(24214)] Try recv next parsed block DEBUG [1594297338.771] [src/burnchains/burnchain.rs:760] [ThreadId(24214)] Process block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.771] [src/burnchains/burnchain.rs:706] [ThreadId(24214)] Get header for block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.771] [src/burnchains/burnchain.rs:499] [ThreadId(24214)] Extract Blockstack transactions from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.772] [src/burnchains/burnchain.rs:680] [ThreadId(24214)] BEGIN(8636) block (3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229,6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681) DEBUG [1594297338.772] [src/burnchains/burnchain.rs:681] [ThreadId(24214)] Append 4 operation(s) from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 DEBUG [1594297338.772] [src/burnchains/burnchain.rs:608] [ThreadId(24214)] Check Blockstack transactions from block 8636 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229 ``` which has a higher timestamp, so maybe the cause for the overflow? 
and here's the part of the log where the "inversion" happened: ``` DEBUG [1594297338.938] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is b6625fae007e2cb50026fa734d2f4ac129582599cdef73ec6ddaefb280b6b78b (update_skiplist=false) DEBUG [1594297338.938] [src/chainstate/stacks/index/marf.rs:467] [ThreadId(24214)] MARF Insert in 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229: 'c3d18ea0eea5a15cfe8a1ca63912d728ce68d3b9389688138c38895ec2199f40' = '6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba76810000000000000000' (...[]) DEBUG [1594297336.352] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is b91e9d5d206cf411d4ea03d9840f76b7b943bd4a7b1a3b82845db22f050dfc4f (update_skiplist=false) DEBUG [1594297336.353] [src/chainstate/stacks/index/marf.rs:467] [ThreadId(24214)] MARF Insert in 3d58d1edb7247e15cff6c2a496d07597bf2d9a2bb506ef5c794554834bbff229: 'c612882cf676e1d004ad9c4cc2ee70757b92fbcdc5ef4e732570058b4454ddb5' = 'bb210000000000000000000000000000000000000000000000000000000000000000000000000000' (...[]) DEBUG [1594297336.365] [src/chainstate/stacks/index/trie.rs:745] [ThreadId(24214)] Next root hash is acd85ff39cc9421273de68319227d3a017aa1c30046ff62d0af4083b6317806b (update_skiplist=true) DEBUG [1594297336.365] [src/chainstate/stacks/index/marf.rs:786] [ThreadId(24214)] Opened 6f74eb55c08bbaae46444e7465434a197b2002d380fffe9eb6a03ceef6ba7681 in ./.stack/burnchain/db/bitcoin/regtest/burn.db/marf ``` Unfortunately, while trying to save the terminal, my mac rebooted (probably was over 100Gb), so I no longer have the whole log, but I can provide more logs right before the panic if needed. The line numbers in `burnchain.rs` are consistent with current master. ## Steps To Reproduce not sure how you would reproduce for sure... run for **a while**? ## Expected behavior Don't panic would be preferable ## Environment - OS: Mac OS 10.14.6 - Rust version: rustc 1.43.1 (8d69840ab 2020-05-04)
label: test
text:
miner panicked at attempt to subtract with overflow describe the bug was running a miner for over nearly hours i think and the miner panicked with debug flush identifier of self is debug insert block snapshot state for block debug accepted leader key register at debug accepted leader block commit at debug accepted leader key register at debug accepted leader block commit at debug ops hash debug index root debug sortition hash debug consensus thread panicked at attempt to subtract with overflow src burnchains burnchain rs note run with rust backtrace environment variable to display a backtrace thread main panicked at called result unwrap on an err value any src burnchains burnchain rs and way earlier in the log i was running with lots of additional logging but the first prior try recv next parsed block log debug parsed block in debug try recv next parsed block debug process block debug get header for block debug extract blockstack transactions from block debug begin block debug append operation s from block debug check blockstack transactions from block which has a higher timestamp so maybe the cause for the overflow and here s the part of the log where the inversion happened debug next root hash is update skiplist false debug marf insert in debug next root hash is update skiplist false debug marf insert in debug next root hash is update skiplist true debug opened in stack burnchain db bitcoin regtest burn db marf unfortunately while trying to save the terminal my mac rebooted probably was over so i no longer have the whole log but i can provide more logs right before the panic if needed the line numbers in burnchain rs are consistent with current master steps to reproduce not sure how you would reproduce for sure run for a while expected behavior don t panic would be preferable environment os mac os rust version rustc
binary_label: 1

Unnamed: 0: 187,344
id: 6,756,324,863
type: IssuesEvent
created_at: 2017-10-24 06:29:48
repo: openebs/openebs
repo_url: https://api.github.com/repos/openebs/openebs
action: closed
title: openebs-provisioner get crashed after launching percona pods.
labels: area/volume/provisioning kind/bug priority/0
body:
BUG REPORT **What happened**: * I tried to launch percona and just after creation of pvc and percona pod openebs-provisoner pod went crashloopbackoff. Here is the screenshot. ![screenshot](https://user-images.githubusercontent.com/19219723/31268646-f2415e94-aa9a-11e7-8bc7-430b409ea4ff.png) **What I did?** i deleted the pvc pods earlier and ran sql-loadgen.yaml. **What you expected to happen**: * openebs-provisioner should not show error or crashloopbackoff after provisioning the volumes. **Anything else we need to know?**: Below is the link attached for more info: * [maya-apiserver log](https://gist.github.com/utkarshmani1997/d559b2f6263f716fbcbad73193c602ef) * [openebs-provisioner pod decription](https://gist.github.com/utkarshmani1997/63561f669d904d46f0691cd91b4aff8d) * [openebs-provisioner log](https://gist.github.com/utkarshmani1997/2946ec30502548c852def53b295ab2f5) * percona goes to crashloopbackoff after sometime and restarts happens several times. [percona logs](https://gist.github.com/utkarshmani1997/5d16e0c46423be40b7025409dc16d5dc) * [kubectl describe pod percona](https://gist.github.com/utkarshmani1997/a97746293308c798a23fcaa62c9a7ba5) **Environment**: - kubectl get nodes : all the nodes are running (Status : Ready) - [kubectl get pods --all-namespaces] : all pods are running except provisioner - kubectl get services : all the services are running fine - [kubectl get sc](https://gist.github.com/utkarshmani1997/f949cc313bb1e777b561237359e58c8f) - [kubectl get pv](https://gist.github.com/utkarshmani1997/a42cbe85b164cae579c64d440edc89ff) - [kubectl describe pv](https://gist.github.com/utkarshmani1997/57c88e0f15f73a1baa8f53e1f2d4974a) - [kubectl get pvc](https://gist.github.com/utkarshmani1997/6583715b519baf94e5387dfd55d9b971) - OS (e.g. from /etc/os-release): Running on vagrant setup, this issue was countered in minikube also.
index: 1.0
text_combine:
openebs-provisioner get crashed after launching percona pods. - BUG REPORT **What happened**: * I tried to launch percona and just after creation of pvc and percona pod openebs-provisoner pod went crashloopbackoff. Here is the screenshot. ![screenshot](https://user-images.githubusercontent.com/19219723/31268646-f2415e94-aa9a-11e7-8bc7-430b409ea4ff.png) **What I did?** i deleted the pvc pods earlier and ran sql-loadgen.yaml. **What you expected to happen**: * openebs-provisioner should not show error or crashloopbackoff after provisioning the volumes. **Anything else we need to know?**: Below is the link attached for more info: * [maya-apiserver log](https://gist.github.com/utkarshmani1997/d559b2f6263f716fbcbad73193c602ef) * [openebs-provisioner pod decription](https://gist.github.com/utkarshmani1997/63561f669d904d46f0691cd91b4aff8d) * [openebs-provisioner log](https://gist.github.com/utkarshmani1997/2946ec30502548c852def53b295ab2f5) * percona goes to crashloopbackoff after sometime and restarts happens several times. [percona logs](https://gist.github.com/utkarshmani1997/5d16e0c46423be40b7025409dc16d5dc) * [kubectl describe pod percona](https://gist.github.com/utkarshmani1997/a97746293308c798a23fcaa62c9a7ba5) **Environment**: - kubectl get nodes : all the nodes are running (Status : Ready) - [kubectl get pods --all-namespaces] : all pods are running except provisioner - kubectl get services : all the services are running fine - [kubectl get sc](https://gist.github.com/utkarshmani1997/f949cc313bb1e777b561237359e58c8f) - [kubectl get pv](https://gist.github.com/utkarshmani1997/a42cbe85b164cae579c64d440edc89ff) - [kubectl describe pv](https://gist.github.com/utkarshmani1997/57c88e0f15f73a1baa8f53e1f2d4974a) - [kubectl get pvc](https://gist.github.com/utkarshmani1997/6583715b519baf94e5387dfd55d9b971) - OS (e.g. from /etc/os-release): Running on vagrant setup, this issue was countered in minikube also.
label: non_test
text:
openebs provisioner get crashed after launching percona pods bug report what happened i tried to launch percona and just after creation of pvc and percona pod openebs provisoner pod went crashloopbackoff here is the screenshot what i did i deleted the pvc pods earlier and ran sql loadgen yaml what you expected to happen openebs provisioner should not show error or crashloopbackoff after provisioning the volumes anything else we need to know below is the link attached for more info percona goes to crashloopbackoff after sometime and restarts happens several times environment kubectl get nodes all the nodes are running status ready all pods are running except provisioner kubectl get services all the services are running fine os e g from etc os release running on vagrant setup this issue was countered in minikube also
binary_label: 0

Unnamed: 0: 247,932
id: 7,925,372,229
type: IssuesEvent
created_at: 2018-07-05 20:23:12
repo: inverse-inc/packetfence
repo_url: https://api.github.com/repos/inverse-inc/packetfence
action: closed
title: Role selection isn't working on Firewall SSO and Scan.
labels: Priority: High Type: Bug
body:
This code (in Firewall_SSO.pm) doesn't work anymore: ``` sub options_categories { my $self = shift; my $result = $self->form->roles; my @roles = map { $_->{name} => $_->{name} } @{$result} if ($result); return ('' => '', @roles); } ``` This one work: ``` sub options_categories { my $self = shift; my ($status, $result) = $self->form->ctx->model('Config::Roles')->listFromDB(); my @roles = map { $_->{name} => $_->{name} } @{$result} if ($result); return ('' => '', @roles); } ```
index: 1.0
text_combine:
Role selection isn't working on Firewall SSO and Scan. - This code (in Firewall_SSO.pm) doesn't work anymore: ``` sub options_categories { my $self = shift; my $result = $self->form->roles; my @roles = map { $_->{name} => $_->{name} } @{$result} if ($result); return ('' => '', @roles); } ``` This one work: ``` sub options_categories { my $self = shift; my ($status, $result) = $self->form->ctx->model('Config::Roles')->listFromDB(); my @roles = map { $_->{name} => $_->{name} } @{$result} if ($result); return ('' => '', @roles); } ```
label: non_test
text:
role selection isn t working on firewall sso and scan this code in firewall sso pm doesn t work anymore sub options categories my self shift my result self form roles my roles map name name result if result return roles this one work sub options categories my self shift my status result self form ctx model config roles listfromdb my roles map name name result if result return roles
binary_label: 0

Unnamed: 0: 11,191
id: 2,641,732,933
type: IssuesEvent
created_at: 2015-03-11 19:25:35
repo: chrsmith/html5rocks
repo_url: https://api.github.com/repos/chrsmith/html5rocks
action: closed
title: slides: a simpler template
labels: Priority-Medium Slides Type-Defect
body:
Original [issue 93](https://code.google.com/p/html5rocks/issues/detail?id=93) created by chrsmith on 2010-07-28T21:49:22.000Z: Reported by KaiYanNju, Apr 30, 2010 It's coolest slider I have seen. So I think a template to make html5-slide is useful. The tempate should not have too many features,a video,a text and a mousic is enough. And I can do this work if you like. Comment 1 by paulirish@google.com, Jun 24, 2010 KaiYanNju, we definitely agree and would love to see a base presentation template. Annie Sullivan's recent velocity talk slides might interest you.. it was a variation of this base: http://www.monkey.org/~annie/ProgressiveEnhancement.html We'd love to see what you come up with :) (I'll keep this ticket open for now.. just to track this effort) Comment 2 by KaiYanNju, Jun 25, 2010 Thanks. The velocity talk is wonderful.This is my little effort. http://code.google.com/p/html5-slide-template/ And I want to make a latex silde Macro compatible with beamer to do this work. It can do this: *.tex --&gt; *.html &amp; *.tex --&gt; *.pdf It takes time.I think it will be popular.
index: 1.0
text_combine:
slides: a simpler template - Original [issue 93](https://code.google.com/p/html5rocks/issues/detail?id=93) created by chrsmith on 2010-07-28T21:49:22.000Z: Reported by KaiYanNju, Apr 30, 2010 It's coolest slider I have seen. So I think a template to make html5-slide is useful. The tempate should not have too many features,a video,a text and a mousic is enough. And I can do this work if you like. Comment 1 by paulirish@google.com, Jun 24, 2010 KaiYanNju, we definitely agree and would love to see a base presentation template. Annie Sullivan's recent velocity talk slides might interest you.. it was a variation of this base: http://www.monkey.org/~annie/ProgressiveEnhancement.html We'd love to see what you come up with :) (I'll keep this ticket open for now.. just to track this effort) Comment 2 by KaiYanNju, Jun 25, 2010 Thanks. The velocity talk is wonderful.This is my little effort. http://code.google.com/p/html5-slide-template/ And I want to make a latex silde Macro compatible with beamer to do this work. It can do this: *.tex --&gt; *.html &amp; *.tex --&gt; *.pdf It takes time.I think it will be popular.
label: non_test
text:
slides a simpler template original created by chrsmith on reported by kaiyannju apr it s coolest slider i have seen so i think a template to make slide is useful the tempate should not have too many features a video a text and a mousic is enough and i can do this work if you like comment by paulirish google com jun kaiyannju we definitely agree and would love to see a base presentation template annie sullivan s recent velocity talk slides might interest you it was a variation of this base we d love to see what you come up with i ll keep this ticket open for now just to track this effort comment by kaiyannju jun thanks the velocity talk is wonderful this is my little effort and i want to make a latex silde macro compatible with beamer to do this work it can do this tex gt html amp tex gt pdf it takes time i think it will be popular
binary_label: 0

Unnamed: 0: 284,519
id: 30,913,640,762
type: IssuesEvent
created_at: 2023-08-05 02:28:33
repo: Nivaskumark/kernel_v4.19.72_old
repo_url: https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
action: reopened
title: CVE-2021-3483 (High) detected in linux-yoctov5.4.51
labels: Mend: dependency security vulnerability
body:
## CVE-2021-3483 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
Versions before kernel 5.12-rc6 are affected <p>Publish Date: 2021-05-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3483>CVE-2021-3483</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3483">https://www.linuxkernelcves.com/cves/CVE-2021-3483</a></p> <p>Release Date: 2021-05-17</p> <p>Fix Resolution: v4.4.265, v4.9.265, v4.14.229, v4.19.185, v5.4.110, v5.10.28, v5.11.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2021-3483 (High) detected in linux-yoctov5.4.51 - ## CVE-2021-3483 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firewire/nosy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. 
Versions before kernel 5.12-rc6 are affected <p>Publish Date: 2021-05-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3483>CVE-2021-3483</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3483">https://www.linuxkernelcves.com/cves/CVE-2021-3483</a></p> <p>Release Date: 2021-05-17</p> <p>Fix Resolution: v4.4.265, v4.9.265, v4.14.229, v4.19.185, v5.4.110, v5.10.28, v5.11.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers firewire nosy c drivers firewire nosy c vulnerability details a flaw was found in the nosy driver in the linux kernel this issue allows a device to be inserted twice into a doubly linked list leading to a use after free when one of these devices is removed the highest threat from this vulnerability is to confidentiality integrity as well as system availability versions before kernel are affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
75,919
9,358,263,637
IssuesEvent
2019-04-02 01:37:29
isenseDev/MYR
https://api.github.com/repos/isenseDev/MYR
closed
Remove display components from containers
Frontend Redesign enhancement
Use containers for binding and move the display components into screens
1.0
Remove display components from containers - Use containers for binding and move the display components into screens
non_test
remove display components from containers use containers for binding and move the display components into screens
0
282,681
30,889,402,969
IssuesEvent
2023-08-04 02:40:16
madhans23/linux-4.1.15
https://api.github.com/repos/madhans23/linux-4.1.15
reopened
CVE-2023-1281 (High) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2023-1281 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/cls_tcindex.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/cls_tcindex.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Use After Free vulnerability in Linux kernel traffic control index filter (tcindex) allows Privilege Escalation. The imperfect hash area can be updated while packets are traversing, which will cause a use-after-free when 'tcf_exts_exec()' is called with the destroyed tcf_ext. A local attacker user can use this vulnerability to elevate its privileges to root. This issue affects Linux Kernel: from 4.14 before git commit ee059170b1f7e94e55fa6cadee544e176a6e59c2. 
<p>Publish Date: 2023-03-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1281>CVE-2023-1281</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1281">https://www.linuxkernelcves.com/cves/CVE-2023-1281</a></p> <p>Release Date: 2023-03-22</p> <p>Fix Resolution: v5.10.169,v5.15.95,v6.1.13,v6.2,v6.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-1281 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2023-1281 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/cls_tcindex.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/cls_tcindex.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Use After Free vulnerability in Linux kernel traffic control index filter (tcindex) allows Privilege Escalation. The imperfect hash area can be updated while packets are traversing, which will cause a use-after-free when 'tcf_exts_exec()' is called with the destroyed tcf_ext. A local attacker user can use this vulnerability to elevate its privileges to root. This issue affects Linux Kernel: from 4.14 before git commit ee059170b1f7e94e55fa6cadee544e176a6e59c2. 
<p>Publish Date: 2023-03-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1281>CVE-2023-1281</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1281">https://www.linuxkernelcves.com/cves/CVE-2023-1281</a></p> <p>Release Date: 2023-03-22</p> <p>Fix Resolution: v5.10.169,v5.15.95,v6.1.13,v6.2,v6.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files net sched cls tcindex c net sched cls tcindex c vulnerability details use after free vulnerability in linux kernel traffic control index filter tcindex allows privilege escalation  the imperfect hash area can be updated while packets are traversing which will cause a use after free when tcf exts exec is called with the destroyed tcf ext  a local attacker user can use this vulnerability to elevate its privileges to root this issue affects linux kernel from before git commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
37,146
18,156,515,534
IssuesEvent
2021-09-27 02:52:30
verilator/verilator
https://api.github.com/repos/verilator/verilator
closed
Support profile guided optimization of mtasks
resolution: fixed area: performance type: feature-non-IEEE
I've nearly finished implementing profile guided optimization (PGO) of the multithreaded mtasks. Details of using this will be captured in the documentation. This issue collects some performance notes. Using verilated_ext_test t_cores_swerv_cmark.pl and --threads 4 Best of 5 runs with --threads 4: 25.31 seconds Best of 5 runs with --threads 4 and PGO from vlt created with --prof-threads: 22.17 seconds Result: 14% speedup (about 1/2 of a core). Running on a very large design with 8 threads had no significant improvement, but the mtask gantt chart was fairly "flat" so that isn't a surprise.
True
Support profile guided optimization of mtasks - I've nearly finished implementing profile guided optimization (PGO) of the multithreaded mtasks. Details of using this will be captured in the documentation. This issue collects some performance notes. Using verilated_ext_test t_cores_swerv_cmark.pl and --threads 4 Best of 5 runs with --threads 4: 25.31 seconds Best of 5 runs with --threads 4 and PGO from vlt created with --prof-threads: 22.17 seconds Result: 14% speedup (about 1/2 of a core). Running on a very large design with 8 threads had no significant improvement, but the mtask gantt chart was fairly "flat" so that isn't a surprise.
non_test
support profile guided optimization of mtasks i ve nearly finished implementing profile guided optimization pgo of the multithreaded mtasks details of using this will be captured in the documentation this issue collects some performance notes using verilated ext test t cores swerv cmark pl and threads best of runs with threads seconds best of runs with threads and pgo from vlt created with prof threads seconds result speedup about of a core running on a very large design with threads had no significant improvement but the mtask gantt chart was fairly flat so that isn t a surprise
0
150,968
11,995,487,402
IssuesEvent
2020-04-08 15:16:48
wasabee-project/Wasabee-IITC
https://api.github.com/repos/wasabee-project/Wasabee-IITC
closed
Split languages into individual JSON files
In Testing
Crystalwizard (ENL L16 AZ), [06.04.20 15:23] I have a concern btw - each language has so many keys that translations.json is going to get huge, fast. should we be thinking about multiple translations files? deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] Yes. Crystalwizard (ENL L16 AZ), [06.04.20 15:24] okay so how about this, then - a directory called translations and each language in its own json file deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] exactly. deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] import them in static.js Crystalwizard (ENL L16 AZ), [06.04.20 15:25] okay. I'll start working on changing that structure deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:25] thanks! Crystalwizard (ENL L16 AZ), [06.04.20 15:25] sure :) deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:25] we can probably do an "english only" build for those who want to save space, and a "all the languages" build for everyone else. Crystalwizard (ENL L16 AZ), [06.04.20 15:26] that would be cool, too
1.0
Split languages into individual JSON files - Crystalwizard (ENL L16 AZ), [06.04.20 15:23] I have a concern btw - each language has so many keys that translations.json is going to get huge, fast. should we be thinking about multiple translations files? deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] Yes. Crystalwizard (ENL L16 AZ), [06.04.20 15:24] okay so how about this, then - a directory called translations and each language in its own json file deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] exactly. deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:24] import them in static.js Crystalwizard (ENL L16 AZ), [06.04.20 15:25] okay. I'll start working on changing that structure deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:25] thanks! Crystalwizard (ENL L16 AZ), [06.04.20 15:25] sure :) deviousness (Scot 🐢 D/FW Ԙ13), [06.04.20 15:25] we can probably do an "english only" build for those who want to save space, and a "all the languages" build for everyone else. Crystalwizard (ENL L16 AZ), [06.04.20 15:26] that would be cool, too
test
split languages into individual json files crystalwizard enl az i have a concern btw each language has so many keys that translations json is going to get huge fast should we be thinking about multiple translations files deviousness scot 🐢 d fw yes crystalwizard enl az okay so how about this then a directory called translations and each language in its own json file deviousness scot 🐢 d fw exactly deviousness scot 🐢 d fw import them in static js crystalwizard enl az okay i ll start working on changing that structure deviousness scot 🐢 d fw thanks crystalwizard enl az sure deviousness scot 🐢 d fw we can probably do an english only build for those who want to save space and a all the languages build for everyone else crystalwizard enl az that would be cool too
1
212,617
16,469,603,498
IssuesEvent
2021-05-23 06:20:05
theimpossibleastronaut/rmw
https://api.github.com/repos/theimpossibleastronaut/rmw
opened
[MacOS] Sometimes the mkdir test fails
bug osx test
The rmw_mkdir() test *sometimes* fails on MacOS. I've recently noticed it about 1 out of 6 times when I push to the master branch. ``` FAIL: test_utils ================ Assertion failed: (rmw_mkdir (dir, S_IRWXU) == 0), function test_rmw_mkdir, file ../../test/test_utils.c, line 27. FAIL test_utils (exit status: 134) ``` https://travis-ci.com/github/theimpossibleastronaut/rmw/jobs/506968021#L917
1.0
[MacOS] Sometimes the mkdir test fails - The rmw_mkdir() test *sometimes* fails on MacOS. I've recently noticed it about 1 out of 6 times when I push to the master branch. ``` FAIL: test_utils ================ Assertion failed: (rmw_mkdir (dir, S_IRWXU) == 0), function test_rmw_mkdir, file ../../test/test_utils.c, line 27. FAIL test_utils (exit status: 134) ``` https://travis-ci.com/github/theimpossibleastronaut/rmw/jobs/506968021#L917
test
sometimes the mkdir test fails the rmw mkdir test sometimes fails on macos i ve recently noticed it about out of times when i push to the master branch fail test utils assertion failed rmw mkdir dir s irwxu function test rmw mkdir file test test utils c line fail test utils exit status
1
214,957
16,619,909,483
IssuesEvent
2021-06-02 22:24:10
galaxyproject/galaxy
https://api.github.com/repos/galaxyproject/galaxy
closed
pytest ignores tests grouped within a class
area/testing
Currently, pytest will ignore tests grouped in classes. This was done [here](https://github.com/galaxyproject/galaxy/pull/6722/commits/721a4656d941a5462a4db046d192ef81cd1a23c1) to get rid of pytest warnings issued when pytest attempted to collect tests from a class with a name matching the test collection pattern (`Test*`) but also with a constructor, which causes the class *not* to be collected ([see these pytest docs](https://doc.pytest.org/en/latest/explanation/goodpractices.html#conventions-for-python-test-discovery)). As a result, we cannot [organize tests into classes](https://docs.pytest.org/en/6.2.x/getting-started.html#group-multiple-tests-in-a-class), which can be very convenient, especially when there's a lot of tests in a module. Furthermore, if we tried to do that, the test methods inside any class would be skipped silently, which is not at all obvious and can be easily missed. There's a simple way to handle such classes. As per [pytest docs](https://doc.pytest.org/en/latest/example/pythoncollection.html#customizing-test-collection), "users can prevent pytest from discovering classes that start with Test by setting a boolean `__test__` attribute to `False`:" ```py # Will not be discovered as a test class TestClass: __test__ = False ``` Unless there are any objections, I'd like to use this approach and enable class-based test discovery (we don't have many classes in the code base that would need this flag). If this proves inconvenient (which I hope it won't), we can go back to function-based test only.
1.0
pytest ignores tests grouped within a class - Currently, pytest will ignore tests grouped in classes. This was done [here](https://github.com/galaxyproject/galaxy/pull/6722/commits/721a4656d941a5462a4db046d192ef81cd1a23c1) to get rid of pytest warnings issued when pytest attempted to collect tests from a class with a name matching the test collection pattern (`Test*`) but also with a constructor, which causes the class *not* to be collected ([see these pytest docs](https://doc.pytest.org/en/latest/explanation/goodpractices.html#conventions-for-python-test-discovery)). As a result, we cannot [organize tests into classes](https://docs.pytest.org/en/6.2.x/getting-started.html#group-multiple-tests-in-a-class), which can be very convenient, especially when there's a lot of tests in a module. Furthermore, if we tried to do that, the test methods inside any class would be skipped silently, which is not at all obvious and can be easily missed. There's a simple way to handle such classes. As per [pytest docs](https://doc.pytest.org/en/latest/example/pythoncollection.html#customizing-test-collection), "users can prevent pytest from discovering classes that start with Test by setting a boolean `__test__` attribute to `False`:" ```py # Will not be discovered as a test class TestClass: __test__ = False ``` Unless there are any objections, I'd like to use this approach and enable class-based test discovery (we don't have many classes in the code base that would need this flag). If this proves inconvenient (which I hope it won't), we can go back to function-based test only.
test
pytest ignores tests grouped within a class currently pytest will ignore tests grouped in classes this was done to get rid of pytest warnings issued when pytest attempted to collect tests from a class with a name matching the test collection pattern test but also with a constructor which causes the class not to be collected as a result we cannot which can be very convenient especially when there s a lot of tests in a module furthermore if we tried to do that the test methods inside any class would be skipped silently which is not at all obvious and can be easily missed there s a simple way to handle such classes as per users can prevent pytest from discovering classes that start with test by setting a boolean test attribute to false py will not be discovered as a test class testclass test false unless there are any objections i d like to use this approach and enable class based test discovery we don t have many classes in the code base that would need this flag if this proves inconvenient which i hope it won t we can go back to function based test only
1
649,924
21,330,399,354
IssuesEvent
2022-04-18 07:31:06
ballerina-platform/ballerina-dev-website
https://api.github.com/repos/ballerina-platform/ballerina-dev-website
closed
Add a Mandatory Checkbox to Avoid Breaking URL Changes
Priority/High Type/Improvement Points/0.5
**Description:** We need to add a mandatory checkbox to the PR template so that it can be done to verify if redirections are properly added when implementing breaking URL changes. **Describe your problem(s)** **Describe your solution(s)** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
1.0
Add a Mandatory Checkbox to Avoid Breaking URL Changes - **Description:** We need to add a mandatory checkbox to the PR template so that it can be done to verify if redirections are properly added when implementing breaking URL changes. **Describe your problem(s)** **Describe your solution(s)** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
non_test
add a mandatory checkbox to avoid breaking url changes description we need to add a mandatory checkbox to the pr template so that it can be done to verify if redirections are properly added when implementing breaking url changes describe your problem s describe your solution s related issues optional suggested labels optional suggested assignees optional
0
56,025
6,498,833,857
IssuesEvent
2017-08-22 19:00:02
mozilla-mobile/focus-android
https://api.github.com/repos/mozilla-mobile/focus-android
closed
Have an option to bypass Strict Mode on UITests
needs triage testing
Some version of API/simulator combination causes the Focus app to crash early when strict mode is enabled. For example, Nexus 9 fails locally, but for some reason BB Nexus 9 setup currents runs tests with no issues. If there is a way to turn off strict mode (i.e. command-line parameter) then we won't have to worry about picking the right sim/API for testing.
1.0
Have an option to bypass Strict Mode on UITests - Some version of API/simulator combination causes the Focus app to crash early when strict mode is enabled. For example, Nexus 9 fails locally, but for some reason BB Nexus 9 setup currents runs tests with no issues. If there is a way to turn off strict mode (i.e. command-line parameter) then we won't have to worry about picking the right sim/API for testing.
test
have an option to bypass strict mode on uitests some version of api simulator combination causes the focus app to crash early when strict mode is enabled for example nexus fails locally but for some reason bb nexus setup currents runs tests with no issues if there is a way to turn off strict mode i e command line parameter then we won t have to worry about picking the right sim api for testing
1
345,252
30,793,164,790
IssuesEvent
2023-07-31 17:42:56
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
FE | Profile | Contact Information | Fix / remedy failed unit tests for useProfileTransaction hook
bug frontend authenticated-experience profile testing contact-information
## Issue Description The CI/CD Github action runs have been failing periodically due to unit tests that the team is responsible for. Specifically the unit tests for the `useProfileTransaction` hook. Not really sure why those passed for a long time and now are failing, but the hook really isn't used outside of experimental forms library work. I think the plan of action should probably go: - figure out if there is an obvious solution to the failing tests, and fix the tests if possible. **OR** - if the tests are flakey to the extent that they should be rebuilt, or the hook should be completely rebuilt, then we will need to do more work to just remove the related functionality all together. Since it's not used in prod, and the form system is not in active development there isn't a reason to the keep the code. - Create a `feature archive branch` that has the current code, so that in the future if we really wanted to revisit it, we can just pull down that branch and attempt to merge/rebase with main so as to get back all the code we remove. - Remove, hook, tests, and experimental component. The home phone was the only section of the profile that had used the new form library. - Remove feature toggle around using the experimental form system from FE and API - I'm kind of leaning towards ripping it all out. Its been a while since we did this work and its just been sitting around for probably over a year at this point. ## Front end tasks - Ensure the page is styled according to design spec. - Ensure the back end data from API is displayed in local and review instance - Ensure unit tests are available. 
- Ensure your code changes are covered by E2E tests - Run axe checks using the Chrome or Firefox browser plugin - Test for color contrast and color blindness issues - Zoom layouts to 400% at 1280px width - Test with 1 or 2 screen readers - Navigate using the keyboard only ## Acceptance Criteria - [x] Designer comments on this issue to approve page styling - [x] Unit tests pass - [x] E2E tests cover current code and regression - [x] End-to-end tests show 0 violations. - [x] The data returned matches API response (for given user or scenario) - [x] All axe checks pass - [x] All color contrast checks pass - [x] All zoom testing passes - [x] All keyboard checks pass - [x] All screenreader checks pass
1.0
FE | Profile | Contact Information | Fix / remedy failed unit tests for useProfileTransaction hook - ## Issue Description The CI/CD Github action runs have been failing periodically due to unit tests that the team is responsible for. Specifically the unit tests for the `useProfileTransaction` hook. Not really sure why those passed for a long time and now are failing, but the hook really isn't used outside of experimental forms library work. I think the plan of action should probably go: - figure out if there is an obvious solution to the failing tests, and fix the tests if possible. **OR** - if the tests are flakey to the extent that they should be rebuilt, or the hook should be completely rebuilt, then we will need to do more work to just remove the related functionality all together. Since it's not used in prod, and the form system is not in active development there isn't a reason to the keep the code. - Create a `feature archive branch` that has the current code, so that in the future if we really wanted to revisit it, we can just pull down that branch and attempt to merge/rebase with main so as to get back all the code we remove. - Remove, hook, tests, and experimental component. The home phone was the only section of the profile that had used the new form library. - Remove feature toggle around using the experimental form system from FE and API - I'm kind of leaning towards ripping it all out. Its been a while since we did this work and its just been sitting around for probably over a year at this point. ## Front end tasks - Ensure the page is styled according to design spec. - Ensure the back end data from API is displayed in local and review instance - Ensure unit tests are available. 
- Ensure your code changes are covered by E2E tests - Run axe checks using the Chrome or Firefox browser plugin - Test for color contrast and color blindness issues - Zoom layouts to 400% at 1280px width - Test with 1 or 2 screen readers - Navigate using the keyboard only ## Acceptance Criteria - [x] Designer comments on this issue to approve page styling - [x] Unit tests pass - [x] E2E tests cover current code and regression - [x] End-to-end tests show 0 violations. - [x] The data returned matches API response (for given user or scenario) - [x] All axe checks pass - [x] All color contrast checks pass - [x] All zoom testing passes - [x] All keyboard checks pass - [x] All screenreader checks pass
test
fe profile contact information fix remedy failed unit tests for useprofiletransaction hook issue description the ci cd github action runs have been failing periodically due to unit tests that the team is responsible for specifically the unit tests for the useprofiletransaction hook not really sure why those passed for a long time and now are failing but the hook really isn t used outside of experimental forms library work i think the plan of action should probably go figure out if there is an obvious solution to the failing tests and fix the tests if possible or if the tests are flakey to the extent that they should be rebuilt or the hook should be completely rebuilt then we will need to do more work to just remove the related functionality all together since it s not used in prod and the form system is not in active development there isn t a reason to the keep the code create a feature archive branch that has the current code so that in the future if we really wanted to revisit it we can just pull down that branch and attempt to merge rebase with main so as to get back all the code we remove remove hook tests and experimental component the home phone was the only section of the profile that had used the new form library remove feature toggle around using the experimental form system from fe and api i m kind of leaning towards ripping it all out its been a while since we did this work and its just been sitting around for probably over a year at this point front end tasks ensure the page is styled according to design spec ensure the back end data from api is displayed in local and review instance ensure unit tests are available ensure your code changes are covered by tests run axe checks using the chrome or firefox browser plugin test for color contrast and color blindness issues zoom layouts to at width test with or screen readers navigate using the keyboard only acceptance criteria designer comments on this issue to approve page styling unit tests pass tests cover 
current code and regression end to end tests show violations the data returned matches api response for given user or scenario all axe checks pass all color contrast checks pass all zoom testing passes all keyboard checks pass all screenreader checks pass
1
126,330
10,419,066,756
IssuesEvent
2019-09-15 13:55:11
zio/zio
https://api.github.com/repos/zio/zio
closed
ZIO Test: Explore Alternatives for Bringing Back Old MockRandom Functionality
tests
Originally `MockRandom` had a very simple implementation along the lines of `MockConsole` where you would “feed” data into `MockRandom` and then when you called methods that relied on `Random` you would just get back the data you fed in. As we built out ZIO Test we upgraded `MockRandom` to a full blown purely functional pseudo-random number generator which is much more powerful but actually doesn’t support the very simple use case of “feed a few numbers, verify that your program returns the expected result” as well. I see a few options here: 1. **Add an additional MockRandom implementation** - Maybe for namespacing we create package `mockVariants` in `mock` and then `SimpleMockRandom` within that. It provides a solution and doesn’t require changing anything else but I worry a bit about providing different ways of doing similar things and it also doesn’t integrate as nicely with `MockEnvironment` since the user has to provide this separately and make sure they don’t override the normal `MockRandom` for applications like property based testing. 2. **Make Current MockRandom replace Random.Live** - `Random` is unique among the core ZIO environment types in that it doesn’t have to involve any interaction with the outside world. To print to the live console we actually have to access some system resource that exists outside ZIO. But `scala.util.Random` is just a pseudo-random number generator itself, just unfortunately one that mutates internal state. Our existing `MockRandom` implementation already generates the same random outputs as `scala.util.Random` for any given seed, and we could make sure it was seeded with the system time initially in the same way when the environment is created. This would be the first time our environmental effects didn’t just delegate to standard library methods, so it may be a hard no. But if we did this we could then make `MockRandom` implement the old behavior. 
One disadvantage would be all our property based testing methods would have to reference `Live` which would be a pain, but maybe we could use a type alias to get around it. 3. **Do Nothing** - `MockRandom` is deterministic so you could always set the seed to a certain value, see what outputs you get for that seed (you can do in REPL with `scala.util.Random` since it provides same outputs) and then check against those. But that is definitely not as convenient as it used to be. There may be variants of these or other alternatives I’m not thinking of so would appreciate any other ideas or thoughts on how important the old `MockRandom` use case is for supporting. As a user would you expect the default `MockRandom` to give you the “feed data” behavior or the “purely functional RNG” behavior?
1.0
ZIO Test: Explore Alternatives for Bringing Back Old MockRandom Functionality - Originally `MockRandom` had a very simple implementation along the lines of `MockConsole` where you would “feed” data into `MockRandom` and then when you called methods that relied on `Random` you would just get back the data you fed in. As we built out ZIO Test we upgraded `MockRandom` to a full blown purely functional pseudo-random number generator which is much more powerful but actually doesn’t support the very simple use case of “feed a few numbers, verify that your program returns the expected result” as well. I see a few options here: 1. **Add an additional MockRandom implementation** - Maybe for namespacing we create package `mockVariants` in `mock` and then `SimpleMockRandom` within that. It provides a solution and doesn’t require changing anything else but I worry a bit about providing different ways of doing similar things and it also doesn’t integrate as nicely with `MockEnvironment` since the user has to provide this separately and make sure they don’t override the normal `MockRandom` for applications like property based testing. 2. **Make Current MockRandom replace Random.Live** - `Random` is unique among the core ZIO environment types in that it doesn’t have to involve any interaction with the outside world. To print to the live console we actually have to access some system resource that exists outside ZIO. But `scala.util.Random` is just a pseudo-random number generator itself, just unfortunately one that mutates internal state. Our existing `MockRandom` implementation already generates the same random outputs as `scala.util.Random` for any given seed, and we could make sure it was seeded with the system time initially in the same way when the environment is created. This would be the first time our environmental effects didn’t just delegate to standard library methods, so it may be a hard no. 
But if we did this we could then make `MockRandom` implement the old behavior. One disadvantage would be all our property based testing methods would have to reference `Live` which would be a pain, but maybe we could use a type alias to get around it. 3. **Do Nothing** - `MockRandom` is deterministic so you could always set the seed to a certain value, see what outputs you get for that seed (you can do in REPL with `scala.util.Random` since it provides same outputs) and then check against those. But that is definitely not as convenient as it used to be. There may be variants of these or other alternatives I’m not thinking of so would appreciate any other ideas or thoughts on how important the old `MockRandom` use case is for supporting. As a user would you expect the default `MockRandom` to give you the “feed data” behavior or the “purely functional RNG” behavior?
test
zio test explore alternatives for bringing back old mockrandom functionality originally mockrandom had a very simple implementation along the lines of mockconsole where you would “feed” data into mockrandom and then when you called methods that relied on random you would just get back the data you fed in as we built out zio test we upgraded mockrandom to a full blown purely functional pseudo random number generator which is much more powerful but actually doesn’t support the very simple use case of “feed a few numbers verify that your program returns the expected result” as well i see a few options here add an additional mockrandom implementation maybe for namespacing we create package mockvariants in mock and then simplemockrandom within that it provides a solution and doesn’t require changing anything else but i worry a bit about providing different ways of doing similar things and it also doesn’t integrate as nicely with mockenvironment since the user has to provide this separately and make sure they don’t override the normal mockrandom for applications like property based testing make current mockrandom replace random live random is unique among the core zio environment types in that it doesn’t have to involve any interaction with the outside world to print to the live console we actually have to access some system resource that exists outside zio but scala util random is just a pseudo random number generator itself just unfortunately one that mutates internal state our existing mockrandom implementation already generates the same random outputs as scala util random for any given seed and we could make sure it was seeded with the system time initially in the same way when the environment is created this would be the first time our environmental effects didn’t just delegate to standard library methods so it may be a hard no but if we did this we could then make mockrandom implement the old behavior one disadvantage would be all our property based testing methods 
would have to reference live which would be a pain but maybe we could use a type alias to get around it do nothing mockrandom is deterministic so you could always set the seed to a certain value see what outputs you get for that seed you can do in repl with scala util random since it provides same outputs and then check against those but that is definitely not as convenient as it used to be there may be variants of these or other alternatives i’m not thinking of so would appreciate any other ideas or thoughts on how important the old mockrandom use case is for supporting as a user would you expect the default mockrandom to give you the “feed data” behavior or the “purely functional rng” behavior
1
78,347
7,626,046,135
IssuesEvent
2018-05-04 00:36:26
eclipse/openj9
https://api.github.com/repos/eclipse/openj9
closed
Test cannot directly copy and run cmd that is printed in console
comp:test prio:medium
Running test in build, we will get console output as following: ``` =============================================== Running test memoryCategories_0 ... =============================================== test with NoOptions { mkdir -p "/tmp/bld_380527/memoryCategories_0"; \ cd "/tmp/bld_380527/memoryCategories_0"; \ export LIBPATH="/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/amd64/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/j9vm:"; \ "/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/java" -Xnocompressedrefs \ -cp "/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/testng.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/jcommander.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/GeneralTest.jar" \ org.testng.TestNG -d "/tmp/bld_380527/memoryCategories_0" "/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/testng.xml" \ -testnames memoryCategories \ -groups level.extended \ -excludegroups d.*.linux_x86-64,d.*.arch.x86,d.*.os.linux,d.*.bits.64,d.*.generic-all; \ if [ $? -eq 0 ] ; then echo "memoryCategories_0""_PASSED"; else perl "-I/bluebird/builds/bld_380527/../../javatest/HEAD_379871/test/lib/perl" -mResultStore::Uploader -e "ResultStore::Uploader::upload('.',380527,532827023,'vmfarm.ottawa.ibm.com:31','results-532827023')"; echo; echo "memoryCategories_0""_FAILED"; fi; } 2>&1 | tee -a "/tmp/bld_380527/TestTargetResult" [IncludeExcludeTestAnnotationTransformer] [INFO] exclude file is /bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources/excludes/current_exclude_SE80.txt ... ... TestNG 6.10 by Cédric Beust (cedric@beust.com) ... ``` Ideally, we should be able to run the test by just run the cmd directly. (stated in test readme). 
``` mkdir -p "/tmp/bld_380527/memoryCategories_0"; \ cd "/tmp/bld_380527/memoryCategories_0"; \ export LIBPATH="/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/amd64/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/j9vm:"; \ "/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/java" -Xnocompressedrefs \ -cp "/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/testng.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/jcommander.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/GeneralTest.jar" \ org.testng.TestNG -d "/tmp/bld_380527/memoryCategories_0" "/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/testng.xml" \ -testnames memoryCategories \ -groups level.extended \ -excludegroups d.*.linux_x86-64,d.*.arch.x86,d.*.os.linux,d.*.bits.64,d.*.generic-all; ``` However, by just running the cmd directly, user will get error: ``` [IncludeExcludeTestAnnotationTransformer] [INFO] exclude file is null [TestNG] [ERROR] Cannot instantiate class org.openj9.test.util.IncludeExcludeTestAnnotationTransformer ```
1.0
Test cannot directly copy and run cmd that is printed in console - Running test in build, we will get console output as following: ``` =============================================== Running test memoryCategories_0 ... =============================================== test with NoOptions { mkdir -p "/tmp/bld_380527/memoryCategories_0"; \ cd "/tmp/bld_380527/memoryCategories_0"; \ export LIBPATH="/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/amd64/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/j9vm:"; \ "/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/java" -Xnocompressedrefs \ -cp "/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/testng.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/jcommander.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/GeneralTest.jar" \ org.testng.TestNG -d "/tmp/bld_380527/memoryCategories_0" "/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/testng.xml" \ -testnames memoryCategories \ -groups level.extended \ -excludegroups d.*.linux_x86-64,d.*.arch.x86,d.*.os.linux,d.*.bits.64,d.*.generic-all; \ if [ $? -eq 0 ] ; then echo "memoryCategories_0""_PASSED"; else perl "-I/bluebird/builds/bld_380527/../../javatest/HEAD_379871/test/lib/perl" -mResultStore::Uploader -e "ResultStore::Uploader::upload('.',380527,532827023,'vmfarm.ottawa.ibm.com:31','results-532827023')"; echo; echo "memoryCategories_0""_FAILED"; fi; } 2>&1 | tee -a "/tmp/bld_380527/TestTargetResult" [IncludeExcludeTestAnnotationTransformer] [INFO] exclude file is /bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources/excludes/current_exclude_SE80.txt ... ... TestNG 6.10 by Cédric Beust (cedric@beust.com) ... ``` Ideally, we should be able to run the test by just run the cmd directly. (stated in test readme). 
``` mkdir -p "/tmp/bld_380527/memoryCategories_0"; \ cd "/tmp/bld_380527/memoryCategories_0"; \ export LIBPATH="/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/../lib/amd64/default:/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/j9vm:"; \ "/bluebird/builds/bld_380527/sdk/xa6480/jre/bin/java" -Xnocompressedrefs \ -cp "/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/resources:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/testng.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/TestConfig/lib/jcommander.jar:/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/GeneralTest.jar" \ org.testng.TestNG -d "/tmp/bld_380527/memoryCategories_0" "/bluebird/builds/bld_380527/jvmtest/test/SE80/Java8andUp/testng.xml" \ -testnames memoryCategories \ -groups level.extended \ -excludegroups d.*.linux_x86-64,d.*.arch.x86,d.*.os.linux,d.*.bits.64,d.*.generic-all; ``` However, by just running the cmd directly, user will get error: ``` [IncludeExcludeTestAnnotationTransformer] [INFO] exclude file is null [TestNG] [ERROR] Cannot instantiate class org.openj9.test.util.IncludeExcludeTestAnnotationTransformer ```
test
test cannot directly copy and run cmd that is printed in console running test in build we will get console output as following running test memorycategories test with nooptions mkdir p tmp bld memorycategories cd tmp bld memorycategories export libpath bluebird builds bld sdk jre bin lib default bluebird builds bld sdk jre bin lib default bluebird builds bld sdk jre bin bluebird builds bld sdk jre bin java xnocompressedrefs cp bluebird builds bld jvmtest test testconfig resources bluebird builds bld jvmtest test testconfig lib testng jar bluebird builds bld jvmtest test testconfig lib jcommander jar bluebird builds bld jvmtest test generaltest jar org testng testng d tmp bld memorycategories bluebird builds bld jvmtest test testng xml testnames memorycategories groups level extended excludegroups d linux d arch d os linux d bits d generic all if then echo memorycategories passed else perl i bluebird builds bld javatest head test lib perl mresultstore uploader e resultstore uploader upload vmfarm ottawa ibm com results echo echo memorycategories failed fi tee a tmp bld testtargetresult exclude file is bluebird builds bld jvmtest test testconfig resources excludes current exclude txt testng by c dric beust cedric beust com ideally we should be able to run the test by just run the cmd directly stated in test readme mkdir p tmp bld memorycategories cd tmp bld memorycategories export libpath bluebird builds bld sdk jre bin lib default bluebird builds bld sdk jre bin lib default bluebird builds bld sdk jre bin bluebird builds bld sdk jre bin java xnocompressedrefs cp bluebird builds bld jvmtest test testconfig resources bluebird builds bld jvmtest test testconfig lib testng jar bluebird builds bld jvmtest test testconfig lib jcommander jar bluebird builds bld jvmtest test generaltest jar org testng testng d tmp bld memorycategories bluebird builds bld jvmtest test testng xml testnames memorycategories groups level extended excludegroups d linux d arch d os linux d bits d 
generic all however by just running the cmd directly user will get error exclude file is null cannot instantiate class org test util includeexcludetestannotationtransformer
1
285,972
24,711,348,793
IssuesEvent
2022-10-20 01:15:32
harvester/harvester
https://api.github.com/repos/harvester/harvester
closed
[FEATURE] Logging
kind/enhancement priority/0 highlight require/HEP Epic not-require/test-plan
1. We need to provide a way for users to view the system-related logs. 2. We need the ability to export the system-related logs to outside the cluster in case the user wants to collect and analyse it later. To elaborate: 1. We need to export Harvester logs to a central location. We should support what Rancher logging can support, including Elasticsearch and Splunk. https://rancher.com/docs/rancher/v2.6/en/logging/ 2. The log we need to collect are: 1. Log in the OS. E.g. major system logs under /var/log/ 2. RKE2/Kubernetes related logs 3. Logs for Harvester components running as pod inside the Kubernetes cluster. 4. Logs for other workloads/VMs.
1.0
[FEATURE] Logging - 1. We need to provide a way for users to view the system-related logs. 2. We need the ability to export the system-related logs to outside the cluster in case the user wants to collect and analyse it later. To elaborate: 1. We need to export Harvester logs to a central location. We should support what Rancher logging can support, including Elasticsearch and Splunk. https://rancher.com/docs/rancher/v2.6/en/logging/ 2. The log we need to collect are: 1. Log in the OS. E.g. major system logs under /var/log/ 2. RKE2/Kubernetes related logs 3. Logs for Harvester components running as pod inside the Kubernetes cluster. 4. Logs for other workloads/VMs.
test
logging we need to provide a way for users to view the system related logs we need the ability to export the system related logs to outside the cluster in case the user wants to collect and analyse it later to elaborate we need to export harvester logs to a central location we should support what rancher logging can support including elasticsearch and splunk the log we need to collect are log in the os e g major system logs under var log kubernetes related logs logs for harvester components running as pod inside the kubernetes cluster logs for other workloads vms
1
246,619
18,848,997,227
IssuesEvent
2021-11-11 18:13:30
kubernetes-sigs/aws-ebs-csi-driver
https://api.github.com/repos/kubernetes-sigs/aws-ebs-csi-driver
closed
Improve documentation around extraTags argument
lifecycle/rotten kind/documentation
/kind bug **What happened?** There is no example of how to set it. The arg help text shows that it's like key=value,key=value (comma-separated) but this isn't good enough because users are not interacting with the binary as a CLI, they are using it as a Pod. Plus I had to check code the difference between extraTags and extraVolumeTags (they are the same but the latter is deprecated). **What you expected to happen?** **How to reproduce it (as minimally and precisely as possible)?** **Anything else we need to know?**: **Environment** - Kubernetes version (use `kubectl version`): - Driver version:
1.0
Improve documentation around extraTags argument - /kind bug **What happened?** There is no example of how to set it. The arg help text shows that it's like key=value,key=value (comma-separated) but this isn't good enough because users are not interacting with the binary as a CLI, they are using it as a Pod. Plus I had to check code the difference between extraTags and extraVolumeTags (they are the same but the latter is deprecated). **What you expected to happen?** **How to reproduce it (as minimally and precisely as possible)?** **Anything else we need to know?**: **Environment** - Kubernetes version (use `kubectl version`): - Driver version:
non_test
improve documentation around extratags argument kind bug what happened there is no example of how to set it the arg help text shows that it s like key value key value comma separated but this isn t good enough because users are not interacting with the binary as a cli they are using it as a pod plus i had to check code the difference between extratags and extravolumetags they are the same but the latter is deprecated what you expected to happen how to reproduce it as minimally and precisely as possible anything else we need to know environment kubernetes version use kubectl version driver version
0
269,835
23,470,677,692
IssuesEvent
2022-08-16 21:27:35
rusqlite/rusqlite
https://api.github.com/repos/rusqlite/rusqlite
closed
Add tests for 32-bit systems.
testing / ci
One impediment here is that the bundled bindings include tests that only work on 64-bit systems (because they include hard-coded constants). Probably worth removing those tests when generating the bundled bindings and including them for non-bundled cases (e.g. buildtime_bindgen).
1.0
Add tests for 32-bit systems. - One impediment here is that the bundled bindings include tests that only work on 64-bit systems (because they include hard-coded constants). Probably worth removing those tests when generating the bundled bindings and including them for non-bundled cases (e.g. buildtime_bindgen).
test
add tests for bit systems one impediment here is that the bundled bindings include tests that only work on bit systems because they include hard coded constants probably worth removing those tests when generating the bundled bindings and including them for non bundled cases e g buildtime bindgen
1
224,261
17,685,507,516
IssuesEvent
2021-08-24 00:31:04
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: sqlsmith/setup=empty/setting=no-mutations failed
C-test-failure O-robot O-roachtest branch-master T-sql-queries E-quick-win
roachtest.sqlsmith/setup=empty/setting=no-mutations [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3248647&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3248647&tab=artifacts#/sqlsmith/setup=empty/setting=no-mutations) on master @ [31af9e32a55a166166e9ba9c5327b7cd847ae236](https://github.com/cockroachdb/cockroach/commits/31af9e32a55a166166e9ba9c5327b7cd847ae236): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=empty/setting=no-mutations/run_1 sqlsmith.go:217,sqlsmith.go:252,test_runner.go:777: error: pq: internal error: no volatility for cast tuple{timestamptz, int[], unknown}::tuple{timestamptz, int2[], unknown} stmt: WITH with_166382 (col_927225) AS ( SELECT * FROM (VALUES (('0001-01-01 00:00:00+00:00':::TIMESTAMPTZ, ARRAY[]:::INT2[], NULL))) AS tab_537300 (col_927225) EXCEPT ALL SELECT * FROM ( VALUES ( ('294276-12-31 23:59:59.999999+00:00':::TIMESTAMPTZ, ARRAY[(-32768):::INT8,0:::INT8,25203:::INT8], NULL) ) ) AS tab_537301 (col_927226) ) SELECT tab_537302.col_927227 AS col_927228 FROM ( VALUES ((-0.2372959852218628):::FLOAT8), (0.3741636276245117:::FLOAT8), ((-0.2869168221950531):::FLOAT8), (1.1908493041992188:::FLOAT8), ((-1.7612881660461426):::FLOAT8), ((-0.4135175943374634):::FLOAT8) ) AS tab_537302 (col_927227) WHERE true; ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/roachtest) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: sqlsmith/setup=empty/setting=no-mutations failed - roachtest.sqlsmith/setup=empty/setting=no-mutations [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3248647&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3248647&tab=artifacts#/sqlsmith/setup=empty/setting=no-mutations) on master @ [31af9e32a55a166166e9ba9c5327b7cd847ae236](https://github.com/cockroachdb/cockroach/commits/31af9e32a55a166166e9ba9c5327b7cd847ae236): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=empty/setting=no-mutations/run_1 sqlsmith.go:217,sqlsmith.go:252,test_runner.go:777: error: pq: internal error: no volatility for cast tuple{timestamptz, int[], unknown}::tuple{timestamptz, int2[], unknown} stmt: WITH with_166382 (col_927225) AS ( SELECT * FROM (VALUES (('0001-01-01 00:00:00+00:00':::TIMESTAMPTZ, ARRAY[]:::INT2[], NULL))) AS tab_537300 (col_927225) EXCEPT ALL SELECT * FROM ( VALUES ( ('294276-12-31 23:59:59.999999+00:00':::TIMESTAMPTZ, ARRAY[(-32768):::INT8,0:::INT8,25203:::INT8], NULL) ) ) AS tab_537301 (col_927226) ) SELECT tab_537302.col_927227 AS col_927228 FROM ( VALUES ((-0.2372959852218628):::FLOAT8), (0.3741636276245117:::FLOAT8), ((-0.2869168221950531):::FLOAT8), (1.1908493041992188:::FLOAT8), ((-1.7612881660461426):::FLOAT8), ((-0.4135175943374634):::FLOAT8) ) AS tab_537302 (col_927227) WHERE true; ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/roachtest) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
roachtest sqlsmith setup empty setting no mutations failed roachtest sqlsmith setup empty setting no mutations with on master the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup empty setting no mutations run sqlsmith go sqlsmith go test runner go error pq internal error no volatility for cast tuple timestamptz int unknown tuple timestamptz unknown stmt with with col as select from values timestamptz array null as tab col except all select from values timestamptz array null as tab col select tab col as col from values as tab col where true reproduce see cc cockroachdb sql queries
1
23,797
16,598,187,136
IssuesEvent
2021-06-01 15:44:38
OpenHistoricalMap/issues
https://api.github.com/repos/OpenHistoricalMap/issues
closed
Move Overpass into the OHM infrastructure
infrastructure nominatim overpass
**What's your idea for a cool feature that would help you use OHM better.** It turns out... Overpass is too important not to have in our infrastructure. Especially when Nominatim has a dependency on Overpass. **Current workarounds** Hosting Overpass externally, but that makes it too tricky to make mods, fix what's broken, etc.
1.0
Move Overpass into the OHM infrastructure - **What's your idea for a cool feature that would help you use OHM better.** It turns out... Overpass is too important not to have in our infrastructure. Especially when Nominatim has a dependency on Overpass. **Current workarounds** Hosting Overpass externally, but that makes it too tricky to make mods, fix what's broken, etc.
non_test
move overpass into the ohm infrastructure what s your idea for a cool feature that would help you use ohm better it turns out overpass is too important not to have in our infrastructure especially when nominatim has a dependency on overpass current workarounds hosting overpass externally but that makes it too tricky to make mods fix what s broken etc
0
108,583
9,311,087,177
IssuesEvent
2019-03-25 20:27:59
kubernetes/minikube
https://api.github.com/repos/kubernetes/minikube
closed
Add integration test to assert that HEAD can use state created by latest release
area/testing help wanted kind/feature lifecycle/frozen priority/important-soon
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> **Is this a BUG REPORT or FEATURE REQUEST?** (choose one): Feature request The 0.12.0 to 0.12.1 release required a recreate of the minikube VM because of localkube DNS changes. This caused a lot of confusion for users upgrading. (#781) Minikube should ideally be able to be upgraded without recreating the VM. We should add a test during our release job to enforce this behavior. In the rare case that we can't get around recreating the VM, we should make a clear note of it in the release docs.
1.0
Add integration test to assert that HEAD can use state created by latest release - <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> **Is this a BUG REPORT or FEATURE REQUEST?** (choose one): Feature request The 0.12.0 to 0.12.1 release required a recreate of the minikube VM because of localkube DNS changes. This caused a lot of confusion for users upgrading. (#781) Minikube should ideally be able to be upgraded without recreating the VM. We should add a test during our release job to enforce this behavior. In the rare case that we can't get around recreating the VM, we should make a clear note of it in the release docs.
test
add integration test to assert that head can use state created by latest release is this a bug report or feature request choose one feature request the to release required a recreate of the minikube vm because of localkube dns changes this caused a lot of confusion for users upgrading minikube should ideally be able to be upgraded without recreating the vm we should add a test during our release job to enforce this behavior in the rare case that we can t get around recreating the vm we should make a clear note of it in the release docs
1
77,819
14,920,607,849
IssuesEvent
2021-01-23 05:44:40
4Moyede/HexatonClass-01
https://api.github.com/repos/4Moyede/HexatonClass-01
closed
[Python] Choosing optimal variable names
Python Code
**Commit** : commit number **Content** : describe your question ```python PFilter = int(input("원하는 가격을 입력하시오 : ")) NFilter = input("지역을 입력하시오 : ") RFilter = int(input("최소 리뷰수를 입력하시오 : ")) airbnb_filter = airbnb_db[(airbnb_db["price"]>PFilter) & (airbnb_db["neighborhood"] == NFilter) & (airbnb_db["reviews"] > RFilter)] print('Start Filtering Job') print('The number of house :',len(airbnb_filter.index)) airbnb_filter ``` This is my code for problem 4 of this assignment. For the filtering in problem 4, I could have just plugged in 250, Downtown, and 5 directly without bothering with variables, but I wanted to show that these values play the role of a "filter", so I named the three variables the filter needs PFilter, NFilter, and RFilter. The reason for this naming: at first I intended to call them literally PriceFilter, NeighborhoodFilter, and ReviewsFilter, but since I wrote the filter as ```python airbnb_filter = airbnb_db[(airbnb_db["price"]>PFilter) & (airbnb_db["neighborhood"] == NFilter) & (airbnb_db["reviews"] > RFilter)] ``` putting those three full names into this expression made the line so long that it was cut off on screen. So I took the first letters of Price, Neighborhood, and Reviews, settled on PFilter, NFilter, and RFilter, coded it that way, and submitted it as my draft. However, this is only a convention I made for myself; I realized that for the mentor or other mentees, unless I add comments explaining what each variable means, the names would be hard to understand intuitively. So the best approach would be to expose all the information in the name, like the price_filter and review_filter that other mentees submitted for this assignment, so the variable is easy to grasp; the downside is that this is not economical when the variable is used repeatedly. So I thought about a few alternatives: - keep all the needed information in the name (review_filter) - keep only 2-3 letters of each word (rev_fil) - write the important information out in full and keep only 2-3 letters of the less important parts (review_fil) Of these, the second way of naming variables is the most economical, but since a minimum amount of information still needs to be recoverable (e.g. with rev_fil it is hard to tell whether rev means reviews or revolution, as in rotation count), I think defining variable names the third way is the best approach!
1.0
[Python] 최적의 변수명짓기 - **Commit** : commit 번호 **Content** : 질문 내용에 대해서 서술하시오 ```python PFilter = int(input("원하는 가격을 입력하시오 : ")) NFilter = input("지역을 입력하시오 : ") RFilter = int(input("최소 리뷰수를 입력하시오 : ")) airbnb_filter = airbnb_db[(airbnb_db["price"]>PFilter) & (airbnb_db["neighborhood"] == NFilter) & (airbnb_db["reviews"] > RFilter)] print('Start Filtering Job') print('The number of house :',len(airbnb_filter.index)) airbnb_filter ``` 이번 과제에서 4번문제의 코드입니다. 4번문제 필터링의 경우 원래는 변수를 굳이 사용할 필요없이 250, Downtown, 5를 각각 대입하여 구해도 상관없지만, 그래도 필터?라는 역할을 수행한다는 것을 보여주고 싶어 필터에 필요한 3가지 변수를 각각 PFilter, NFilter, RFilter로 정했습니다. 이렇게 정한 이유는 처음에는 말그대로 PriceFilter, NeighborhoodFilter, ReviewsFilter로 정하려했지만, 이것이 ```python airbnb_filter = airbnb_db[(airbnb_db["price"]>PFilter) & (airbnb_db["neighborhood"] == NFilter) & (airbnb_db["reviews"] > RFilter)] ``` 저는 필터를 다음과 같이 적었기에 이 구문에 이 변수 3개가 그대로 들어간다면, 내용이 너무 길어 화면에 짤려보였습니다. 그렇기에 정한 필터이름이 Price, Neighborhood, Reviews의 앞글자를 딴, PFilter, NFilter, RFilter로 정하고, 이렇게 코딩하여 초안으로 제출했습니다. 그러나 이것은 어디까지나 제가 약속한 내용일뿐, 멘토님이나 다른 멘티님들이 보시기엔, 이 변수가 어떤 변수를 의미하는지 주석으로 달아주지않는이상 직관적인 이해가 힘들겠다는 생각이 들었습니다 따라서 가장 좋은 방법은 다른 멘티님들이 이번 과제에서 제출한것처럼 price_filter, review_filter와 같이 변수를 쉽게 파악할 수 있게 모든 정보를 드러내는 방법이지만, 이렇게 하면 이 변수를 반복해서 사용할때 경제적이지 못하다는 단점이 있습니다. 그래서 몇가지 대안을 생각해봤는데 - 변수명에 필요한 정보를 모두 살리기(review_filter) - 띄어쓰기를 기준으로 2-3글자만 살리기 (rev_fil) - 중요한 정보만 모두 적고, 별로 중요하지 않은 정보는 변수에서 2-3글자만 살리기 (review_fil) 이중에서 가장 경제적인 코드는 2번째 방법으로 변수의 이름을 짓는것이지만, 그래도 최소한의 정보는 파악할 필요가 있기 때문에 (즉, rev_fil같은 경우 rev가 reviews인지 아니면 회전수를 말하는 revolution인지... 정확히 파악하기 힘듦) 3번째 방법으로 변수의 이름을 정의하는 것이 가장 좋은 방법이라 생각합니다!
non_test
최적의 변수명짓기 commit commit 번호 content 질문 내용에 대해서 서술하시오 python pfilter int input 원하는 가격을 입력하시오 nfilter input 지역을 입력하시오 rfilter int input 최소 리뷰수를 입력하시오 airbnb filter airbnb db pfilter airbnb db nfilter airbnb db rfilter print start filtering job print the number of house len airbnb filter index airbnb filter 이번 과제에서 코드입니다 필터링의 경우 원래는 변수를 굳이 사용할 필요없이 downtown 각각 대입하여 구해도 상관없지만 그래도 필터 라는 역할을 수행한다는 것을 보여주고 싶어 필터에 필요한 변수를 각각 pfilter nfilter rfilter로 정했습니다 이렇게 정한 이유는 처음에는 말그대로 pricefilter neighborhoodfilter reviewsfilter로 정하려했지만 이것이 python airbnb filter airbnb db pfilter airbnb db nfilter airbnb db rfilter 저는 필터를 다음과 같이 적었기에 이 구문에 이 변수 그대로 들어간다면 내용이 너무 길어 화면에 짤려보였습니다 그렇기에 정한 필터이름이 price neighborhood reviews의 앞글자를 딴 pfilter nfilter rfilter로 정하고 이렇게 코딩하여 초안으로 제출했습니다 그러나 이것은 어디까지나 제가 약속한 내용일뿐 멘토님이나 다른 멘티님들이 보시기엔 이 변수가 어떤 변수를 의미하는지 주석으로 달아주지않는이상 직관적인 이해가 힘들겠다는 생각이 들었습니다 따라서 가장 좋은 방법은 다른 멘티님들이 이번 과제에서 제출한것처럼 price filter review filter와 같이 변수를 쉽게 파악할 수 있게 모든 정보를 드러내는 방법이지만 이렇게 하면 이 변수를 반복해서 사용할때 경제적이지 못하다는 단점이 있습니다 그래서 몇가지 대안을 생각해봤는데 변수명에 필요한 정보를 모두 살리기 review filter 띄어쓰기를 기준으로 살리기 rev fil 중요한 정보만 모두 적고 별로 중요하지 않은 정보는 변수에서 살리기 review fil 이중에서 가장 경제적인 코드는 방법으로 변수의 이름을 짓는것이지만 그래도 최소한의 정보는 파악할 필요가 있기 때문에 즉 rev fil같은 경우 rev가 reviews인지 아니면 회전수를 말하는 revolution인지 정확히 파악하기 힘듦 방법으로 변수의 이름을 정의하는 것이 가장 좋은 방법이라 생각합니다
0
52,669
6,264,687,957
IssuesEvent
2017-07-16 10:41:06
go-siris/siris
https://api.github.com/repos/go-siris/siris
closed
Faster Json Renderer
branch for tests available feature request
The default Iris Json renderer uses encode/json. For fast Json Rendering (and also parsing) there is https://github.com/mailru/easyjson. We should add an easy option to use easyjson.
1.0
Faster Json Renderer - The default Iris Json renderer uses encode/json. For fast Json Rendering (and also parsing) there is https://github.com/mailru/easyjson. We should add an easy option to use easyjson.
test
faster json renderer the default iris json renderer uses encode json for fast json rendering and also parsing there is we should add an easy option to use easyjson
1
105,121
11,432,961,799
IssuesEvent
2020-02-04 14:56:19
carbon-design-system/ibm-dotcom-library
https://api.github.com/repos/carbon-design-system/ibm-dotcom-library
closed
Abstract migration: Feature card
Airtable Done Sprint Must Have documentation package: patterns web simplification
### user story As a QA tester using a Windows PC, I need a way to access the design specs from my Windows PC so that I can conducts QA tests. ### Additional Information - Prod QA testing issue (#1213) ### Acceptance criteria - [x] Upload the design spec file(s) to the [Layout patterns Abstract project](https://share.goabstract.com/374b8860-c315-4c4e-b229-fd1926ea8880?) - [x] A collection in Abstract is created and the link to it is available - [x] Associated Git stories have been updated with the new name and link - [x] Update functional specs to point to the new Abstract hosted file - [x] Add or update README Box note with a link pointing to the hosted file - [x] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when Abstract work is finished
1.0
Abstract migration: Feature card - ### user story As a QA tester using a Windows PC, I need a way to access the design specs from my Windows PC so that I can conducts QA tests. ### Additional Information - Prod QA testing issue (#1213) ### Acceptance criteria - [x] Upload the design spec file(s) to the [Layout patterns Abstract project](https://share.goabstract.com/374b8860-c315-4c4e-b229-fd1926ea8880?) - [x] A collection in Abstract is created and the link to it is available - [x] Associated Git stories have been updated with the new name and link - [x] Update functional specs to point to the new Abstract hosted file - [x] Add or update README Box note with a link pointing to the hosted file - [x] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when Abstract work is finished
non_test
abstract migration feature card user story as a qa tester using a windows pc i need a way to access the design specs from my windows pc so that i can conducts qa tests additional information prod qa testing issue acceptance criteria upload the design spec file s to the a collection in abstract is created and the link to it is available associated git stories have been updated with the new name and link update functional specs to point to the new abstract hosted file add or update readme box note with a link pointing to the hosted file a comment is posted in the prod qa issue tagging praveen and chetan when abstract work is finished
0
274,235
8,558,793,215
IssuesEvent
2018-11-08 19:18:45
ansible/galaxy
https://api.github.com/repos/ansible/galaxy
closed
Can't disable namespaces with - or . in their names.
area/backend area/frontend priority/medium type/bug
Legacy namespaces with forbidden characters such as `-` throw `Error! Name can only contain [A-Za-z0-9_] (ansible-galaxy) ` when trying to disable them.
1.0
Can't disable namespaces with - or . in their names. - Legacy namespaces with forbidden characters such as `-` throw `Error! Name can only contain [A-Za-z0-9_] (ansible-galaxy) ` when trying to disable them.
non_test
can t disable namespaces with or in their names legacy namespaces with forbidden characters such as throw error name can only contain ansible galaxy when trying to disable them
0
178,107
29,498,857,461
IssuesEvent
2023-06-02 19:34:13
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Design feature to opt-out from MHV on VA.gov
ux appointments-product-design
## Task Description Add a way for Veterans to temporarily opt out of using Appointments on MHV on VA.gov and return to MHV.gov Liferay. ## Notes and References - This will only show for Appointments until we have Cerner data displaying (phase 1b). After that we'd remove it - [Here's an example](https://dsva.slack.com/archives/C03CGTDLTFF/p1685722771929829?thread_ts=1685119970.558859&cid=C03CGTDLTFF) from Secure Messaging - this would be on their static landing page - Other teams would be showing it for a longer time as they move over. We'll need to coordinate with Secure Messaging on language and UI - This will show for all users ## Acceptance Criteria - [ ] A user can return to the MHV Liferay appointments list from - [ ] From the Appointments on VA.gov Static Landing Page - [ ] From the Appointments on VA.gov Main landing page - [ ] Consider adding an option to leave feedback
1.0
Design feature to opt-out from MHV on VA.gov - ## Task Description Add a way for Veterans to temporarily opt out of using Appointments on MHV on VA.gov and return to MHV.gov Liferay. ## Notes and References - This will only show for Appointments until we have Cerner data displaying (phase 1b). After that we'd remove it - [Here's an example](https://dsva.slack.com/archives/C03CGTDLTFF/p1685722771929829?thread_ts=1685119970.558859&cid=C03CGTDLTFF) from Secure Messaging - this would be on their static landing page - Other teams would be showing it for a longer time as they move over. We'll need to coordinate with Secure Messaging on language and UI - This will show for all users ## Acceptance Criteria - [ ] A user can return to the MHV Liferay appointments list from - [ ] From the Appointments on VA.gov Static Landing Page - [ ] From the Appointments on VA.gov Main landing page - [ ] Consider adding an option to leave feedback
non_test
design feature to opt out from mhv on va gov task description add a way for veterans to temporarily opt out of using appointments on mhv on va gov and return to mhv gov liferay notes and references this will only show for appointments until we have cerner data displaying phase after that we d remove it from secure messaging this would be on their static landing page other teams would be showing it for a longer time as they move over we ll need to coordinate with secure messaging on language and ui this will show for all users acceptance criteria a user can return to the mhv liferay appointments list from from the appointments on va gov static landing page from the appointments on va gov main landing page consider adding an option to leave feedback
0
283,056
30,889,554,281
IssuesEvent
2023-08-04 02:54:04
maddyCode23/linux-4.1.15
https://api.github.com/repos/maddyCode23/linux-4.1.15
reopened
CVE-2019-11478 (High) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2019-11478 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Jonathan Looney discovered that the TCP retransmission queue implementation in tcp_fragment in the Linux kernel could be fragmented when handling certain TCP Selective Acknowledgment (SACK) sequences. A remote attacker could use this to cause a denial of service. This has been fixed in stable kernel releases 4.4.182, 4.9.182, 4.14.127, 4.19.52, 5.1.11, and is fixed in commit f070ef2ac66716357066b683fb0baf55f8191a2e.
<p>Publish Date: 2019-06-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11478>CVE-2019-11478</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11478">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11478</a></p> <p>Release Date: 2019-06-19</p> <p>Fix Resolution: v5.2-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-11478 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2019-11478 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Jonathan Looney discovered that the TCP retransmission queue implementation in tcp_fragment in the Linux kernel could be fragmented when handling certain TCP Selective Acknowledgment (SACK) sequences. A remote attacker could use this to cause a denial of service. This has been fixed in stable kernel releases 4.4.182, 4.9.182, 4.14.127, 4.19.52, 5.1.11, and is fixed in commit f070ef2ac66716357066b683fb0baf55f8191a2e.
<p>Publish Date: 2019-06-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11478>CVE-2019-11478</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11478">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11478</a></p> <p>Release Date: 2019-06-19</p> <p>Fix Resolution: v5.2-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details jonathan looney discovered that the tcp retransmission queue implementation in tcp fragment in the linux kernel could be fragmented when handling certain tcp selective acknowledgment sack sequences a remote attacker could use this to cause a denial of service this has been fixed in stable kernel releases and is fixed in commit publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
74,706
7,438,679,132
IssuesEvent
2018-03-27 01:52:47
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
[fvt] regression for new rflash (python version)
component:test type:feature
what to do: * [ ] overall regression for new ``rflash`` (python version) against ``openbmc`` **Start to test at the third week of 2.13.10 sprint 2**
1.0
[fvt] regression for new rflash (python version) - what to do: * [ ] overall regression for new ``rflash`` (python version) against ``openbmc`` **Start to test at the third week of 2.13.10 sprint 2**
test
regression for new rflash python version what to do overall regression for new rflash python version against openbmc start to test at the third week of sprint
1
199,249
6,987,294,721
IssuesEvent
2017-12-14 08:40:35
qutebrowser/qutebrowser
https://api.github.com/repos/qutebrowser/qutebrowser
closed
Completion window moves horizontally
bug: behavior component: completion component: ui priority: 2 - low
How to reproduce: - Open any page. - Press o (to show the completion window). - Pan left/right, you will see it shakes (bad). How to not reproduce: - Open any page. - Press :. - Pan left/right, you will see it not shaking (good). qutebrowser v1.0.4 Git commit: a137a29cc (2017-12-03 22:32:17 +0100) Backend: QtWebEngine (Chromium 56.0.2924.122) CPython: 3.6.3 Qt: 5.9.3 PyQt: 5.9.2
1.0
Completion window moves horizontally - How to reproduce: - Open any page. - Press o (to show the completion window). - Pan left/right, you will see it shakes (bad). How to not reproduce: - Open any page. - Press :. - Pan left/right, you will see it not shaking (good). qutebrowser v1.0.4 Git commit: a137a29cc (2017-12-03 22:32:17 +0100) Backend: QtWebEngine (Chromium 56.0.2924.122) CPython: 3.6.3 Qt: 5.9.3 PyQt: 5.9.2
non_test
completion window moves horizontally how to reproduce open any page press o to show the completion window pan left right you will see it shakes bad how to not reproduce open any page press pan left right you will see it not shaking good qutebrowser git commit backend qtwebengine chromium cpython qt pyqt
0
96,351
8,607,204,562
IssuesEvent
2018-11-17 19:54:57
couchbase/couchbase-lite-core
https://api.github.com/repos/couchbase/couchbase-lite-core
closed
C4Database leaked in "Pull Overflowed Rev Tree" test
unit-test-failure 👎
Unit tests occasionally fail with a leaked C4Database (and litecore::DataFile::Shared) instance. This happens only rarely when I run tests, but very frequently on the Jenkins Mac builder. [Excerpt of test log](https://gist.github.com/snej/5e571818e771fe50babe170d2e68f889)
1.0
C4Database leaked in "Pull Overflowed Rev Tree" test - Unit tests occasionally fail with a leaked C4Database (and litecore::DataFile::Shared) instance. This happens only rarely when I run tests, but very frequently on the Jenkins Mac builder. [Excerpt of test log](https://gist.github.com/snej/5e571818e771fe50babe170d2e68f889)
test
leaked in pull overflowed rev tree test unit tests occasionally fail with a leaked and litecore datafile shared instance this happens only rarely when i run tests but very frequently on the jenkins mac builder
1
648,691
21,192,058,360
IssuesEvent
2022-04-08 18:37:49
status-im/status-desktop
https://api.github.com/repos/status-im/status-desktop
opened
User is taken to the Community channel directly after logging in
bug Communities priority 1: high
# Bug Report ## Description User is taken to the Community channel directly after logging in. ## Steps to reproduce - Launch app and create a new Community and a few Community channels - Add a couple of known contacts to this Community - Sign Out and Quit - Remove Data folder - Launch app again and sign in #### Expected behavior - User must land on the Chat view #### Actual behavior - User is shown the Community that was created and the Settings cog is now missing https://www.loom.com/share/9581d6a072354dd987d999436d312bf5 ### Additional Information - Status desktop version: Master : f2898b6bf75ed3aae8fc8abe641646949bf78823
1.0
User is taken to the Community channel directly after logging in - # Bug Report ## Description User is taken to the Community channel directly after logging in. ## Steps to reproduce - Launch app and create a new Community and a few Community channels - Add a couple of known contacts to this Community - Sign Out and Quit - Remove Data folder - Launch app again and sign in #### Expected behavior - User must land on the Chat view #### Actual behavior - User is shown the Community that was created and the Settings cog is now missing https://www.loom.com/share/9581d6a072354dd987d999436d312bf5 ### Additional Information - Status desktop version: Master : f2898b6bf75ed3aae8fc8abe641646949bf78823
non_test
user is taken to the community channel directly after logging in bug report description user is taken to the community channel directly after logging in steps to reproduce launch app and create a new community and a few community channels add a couple of known contacts to this community sign out and quit remove data folder launch app again and sign in expected behavior user must land on the chat view actual behavior user is shown the community that was created and the settings cog is now missing additional information status desktop version master
0
203,298
23,140,944,538
IssuesEvent
2022-07-28 18:25:42
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
[Security Solution] Unable to create rule if timestamp fallback toggle button not selected
bug Team:Detections and Resp Team: SecuritySolution Team:Detection Alerts v8.4.0
**Describe the bug** Unable to create rule if timestamp fallback toggle button not selected **Build info** ``` VERSION : main COMMIT: bcbef78b4dfaebe40f334fcaa5cef94fda9acc29 ``` **Preconditions** 1. Kibana should be running **Steps to Reproduce** 1. Navigate to security > rule page 2. Click on create a new rule 3. Select any field from timestamp override 4. Do not select the timestamp fallback button 5. Create the rule 6. Observe that error occur when create the rule **Actual Result** Unable to create the new term rule with custom user **Expected Result** User should be able to create the rule with selecting timestamp fallback toggle button **Screen-cast** https://user-images.githubusercontent.com/61860752/181450383-e1240153-4f87-4cd9-b930-ebf38c928807.mp4
True
[Security Solution] Unable to create rule if timestamp fallback toggle button not selected - **Describe the bug** Unable to create rule if timestamp fallback toggle button not selected **Build info** ``` VERSION : main COMMIT: bcbef78b4dfaebe40f334fcaa5cef94fda9acc29 ``` **Preconditions** 1. Kibana should be running **Steps to Reproduce** 1. Navigate to security > rule page 2. Click on create a new rule 3. Select any field from timestamp override 4. Do not select the timestamp fallback button 5. Create the rule 6. Observe that error occur when create the rule **Actual Result** Unable to create the new term rule with custom user **Expected Result** User should be able to create the rule with selecting timestamp fallback toggle button **Screen-cast** https://user-images.githubusercontent.com/61860752/181450383-e1240153-4f87-4cd9-b930-ebf38c928807.mp4
non_test
unable to create rule if timestamp fallback toggle button not selected describe the bug unable to create rule if timestamp fallback toggle button not selected build info version main commit preconditions kibana should be running steps to reproduce navigate to security rule page click on create a new rule select any field from timestamp override do not select the timestamp fallback button create the rule observe that error occur when create the rule actual result unable to create the new term rule with custom user expected result user should be able to create the rule with selecting timestamp fallback toggle button screen cast
0
291,076
25,119,535,773
IssuesEvent
2022-11-09 06:44:24
o1-labs/snarkyjs
https://api.github.com/repos/o1-labs/snarkyjs
closed
Dex: Token tests
logic-testing
Tests for Creating, Minting, Burning, Transferring, and Putting Preconditions/Assertions on Tokens
1.0
Dex: Token tests - Tests for Creating, Minting, Burning, Transferring, and Putting Preconditions/Assertions on Tokens
test
dex token tests tests for creating minting burning transferring and putting preconditions assertions on tokens
1
237,623
19,661,701,309
IssuesEvent
2022-01-10 17:40:58
Topl/Bifrost
https://api.github.com/repos/Topl/Bifrost
closed
Domain-Specific Language
testing feature
Writing out a template of functions that can be used in Javascript contracts that directly translate to specific sequences of transactions ┆Issue is synchronized with this [Jira Epic](https://topl.atlassian.net/browse/CORE-915) by [Unito](https://www.unito.io)
1.0
Domain-Specific Language - Writing out a template of functions that can be used in Javascript contracts that directly translate to specific sequences of transactions ┆Issue is synchronized with this [Jira Epic](https://topl.atlassian.net/browse/CORE-915) by [Unito](https://www.unito.io)
test
domain specific language writing out a template of functions that can be used in javascript contracts that directly translate to specific sequences of transactions ┆issue is synchronized with this by
1
742,902
25,877,143,795
IssuesEvent
2022-12-14 08:45:13
wso2/api-manager
https://api.github.com/repos/wso2/api-manager
opened
Log Tracing is not working as expected in OpenTracing
Type/Bug Priority/Normal Component/APIM 4.x.x
### Description When you enable log tracing as mentioned in [1], the log file will not contain correct information. It only prints something like, `n14:11:48,992 [-] [PassThroughMessageProcessor-1] TRACE ` ### Steps to Reproduce Follow the steps in [1] ### Affected Component APIM ### Version 4.2.0-Pre-Alpha ### Environment Details (with versions) _No response_ ### Relevant Log Output ```shell n14:11:48,992 [-] [PassThroughMessageProcessor-1] TRACE n14:11:50,073 [-] [PassThroughMessageProcessor-2] TRACE n14:11:51,513 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,513 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,515 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,552 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,571 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,575 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,576 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,576 [-] [PassThroughMessageProcessor-4] TRACE n14:11:52,178 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,179 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,180 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,180 [-] [PassThroughMessageProcessor-5] TRACE ``` ### Related Issues _No response_ ### Suggested Labels _No response_
1.0
Log Tracing is not working as expected in OpenTracing - ### Description When you enable log tracing as mentioned in [1], the log file will not contain correct information. It only prints something like, `n14:11:48,992 [-] [PassThroughMessageProcessor-1] TRACE ` ### Steps to Reproduce Follow the steps in [1] ### Affected Component APIM ### Version 4.2.0-Pre-Alpha ### Environment Details (with versions) _No response_ ### Relevant Log Output ```shell n14:11:48,992 [-] [PassThroughMessageProcessor-1] TRACE n14:11:50,073 [-] [PassThroughMessageProcessor-2] TRACE n14:11:51,513 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,513 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,515 [-] [PassThroughMessageProcessor-3] TRACE n14:11:51,552 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,571 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,575 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,576 [-] [PassThroughMessageProcessor-4] TRACE n14:11:51,576 [-] [PassThroughMessageProcessor-4] TRACE n14:11:52,178 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,179 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,180 [-] [PassThroughMessageProcessor-5] TRACE n14:11:52,180 [-] [PassThroughMessageProcessor-5] TRACE ``` ### Related Issues _No response_ ### Suggested Labels _No response_
non_test
log tracing is not working as expected in opentracing description when you enable log tracing as mentioned in the log file will not contain correct information it only prints something like trace steps to reproduce follow the steps in affected component apim version pre alpha environment details with versions no response relevant log output shell trace trace trace trace trace trace trace trace trace trace trace trace trace trace related issues no response suggested labels no response
0
129,554
5,098,705,892
IssuesEvent
2017-01-04 03:17:59
PolarisSS13/Polaris
https://api.github.com/repos/PolarisSS13/Polaris
closed
Blindness, Confusion and Blurry Eyes do not modify chance to hit for some attacks
Oversight Priority: High
#### Brief description of the issue Blindness from any source, including genetics, having no eyes, and being flashed does not add the intended 75% to hit malus. #2134 Confusion and blurred vision from flashes also seems to have no effect on chance to hit. #### What you expected to happen Someone with no eyes to have difficulty punching you. #### What actually happened 90%+ accuracy for all melee attacks. Ranged accuracy seemed to be affected. With an average of 2/5 shots hitting at 1 tile range. #### Steps to reproduce Blind self, fire gun/punch friend. #### Additional info: - **Server Revision**: Server revision: master - 2016-10-21 67330d1df626506b796a218c7c261e868060db9c
1.0
Blindness, Confusion and Blurry Eyes do not modify chance to hit for some attacks - #### Brief description of the issue Blindness from any source, including genetics, having no eyes, and being flashed does not add the intended 75% to hit malus. #2134 Confusion and blurred vision from flashes also seems to have no effect on chance to hit. #### What you expected to happen Someone with no eyes to have difficulty punching you. #### What actually happened 90%+ accuracy for all melee attacks. Ranged accuracy seemed to be affected. With an average of 2/5 shots hitting at 1 tile range. #### Steps to reproduce Blind self, fire gun/punch friend. #### Additional info: - **Server Revision**: Server revision: master - 2016-10-21 67330d1df626506b796a218c7c261e868060db9c
non_test
blindness confusion and blurry eyes do not modify chance to hit for some attacks brief description of the issue blindness from any source including genetics having no eyes and being flashed does not add the intended to hit malus confusion and blurred vision from flashes also seems to have no effect on chance to hit what you expected to happen someone with no eyes to have difficulty punching you what actually happened accuracy for all melee attacks ranged accuracy seemed to be affected with an average of shots hitting at tile range steps to reproduce blind self fire gun punch friend additional info server revision server revision master
0
6,115
5,286,679,483
IssuesEvent
2017-02-08 10:02:38
MadOgre/angular-rest
https://api.github.com/repos/MadOgre/angular-rest
opened
Needs to be optimized for lazy API loading
performance related
Currently the root component is loading all existing articles on the system. Maybe implement a delta update system by last modified date.
True
Needs to be optimized for lazy API loading - Currently the root component is loading all existing articles on the system. Maybe implement a delta update system by last modified date.
non_test
needs to be optimized for lazy api loading currently the root component is loading all existing articles on the system maybe implement a delta update system by last modified date
0
200,438
22,773,751,101
IssuesEvent
2022-07-08 12:37:12
ignatandrei/AspNetCoreImageTagHelper
https://api.github.com/repos/ignatandrei/AspNetCoreImageTagHelper
opened
CVE-2016-10735 (Medium) detected in bootstrap-3.3.6.min.js, bootstrap-3.3.6.js
security vulnerability
## CVE-2016-10735 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.6.min.js</b>, <b>bootstrap-3.3.6.js</b></p></summary> <p> <details><summary><b>bootstrap-3.3.6.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.min.js</a></p> <p>Path to vulnerable library: /src/TestWebSite/wwwroot/lib/bootstrap/dist/js/bootstrap.min.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.6.min.js** (Vulnerable Library) </details> <details><summary><b>bootstrap-3.3.6.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.js</a></p> <p>Path to vulnerable library: /src/TestWebSite/wwwroot/lib/bootstrap/dist/js/bootstrap.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.6.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ignatandrei/AspNetCoreImageTagHelper/commit/1acdde0e5e2173fc9be87dbb3080ebd3037e487b">1acdde0e5e2173fc9be87dbb3080ebd3037e487b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041. <p>Publish Date: 2019-01-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p> <p>Release Date: 2019-01-09</p> <p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-10735 (Medium) detected in bootstrap-3.3.6.min.js, bootstrap-3.3.6.js - ## CVE-2016-10735 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.6.min.js</b>, <b>bootstrap-3.3.6.js</b></p></summary> <p> <details><summary><b>bootstrap-3.3.6.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.min.js</a></p> <p>Path to vulnerable library: /src/TestWebSite/wwwroot/lib/bootstrap/dist/js/bootstrap.min.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.6.min.js** (Vulnerable Library) </details> <details><summary><b>bootstrap-3.3.6.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/js/bootstrap.js</a></p> <p>Path to vulnerable library: /src/TestWebSite/wwwroot/lib/bootstrap/dist/js/bootstrap.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.6.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/ignatandrei/AspNetCoreImageTagHelper/commit/1acdde0e5e2173fc9be87dbb3080ebd3037e487b">1acdde0e5e2173fc9be87dbb3080ebd3037e487b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041. <p>Publish Date: 2019-01-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p> <p>Release Date: 2019-01-09</p> <p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in bootstrap min js bootstrap js cve medium severity vulnerability vulnerable libraries bootstrap min js bootstrap js bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library src testwebsite wwwroot lib bootstrap dist js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library src testwebsite wwwroot lib bootstrap dist js bootstrap js dependency hierarchy x bootstrap js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap beta step up your open source security game with mend
0
3,415
2,763,271,488
IssuesEvent
2015-04-29 08:05:14
apinf/api-umbrella-dashboard
https://api.github.com/repos/apinf/api-umbrella-dashboard
closed
Create communication plan document
Documentation
Create a documennt describing tools and procedures for team communication, review process, etc..
1.0
Create communication plan document - Create a documennt describing tools and procedures for team communication, review process, etc..
non_test
create communication plan document create a documennt describing tools and procedures for team communication review process etc
0
47,722
13,066,142,182
IssuesEvent
2020-07-30 21:04:45
googlefonts/noto-fonts
https://api.github.com/repos/googlefonts/noto-fonts
closed
Noto Nastaliq ک final and isolated? shape
FoundIn-1.x Script-Urdu Type-Defect
``` What steps will reproduce the problem? The 'kaaf' which is used at this moment produces the miniature kaaf symbol together with the letter in the final and stand-alone positions. This occurs in some words while in others it doesn't! What is the expected output? What do you see instead? In Urdu it is never used, to the contrary, this shape is Arabic only. I suppose it is neither used in Persian. Please see the attached illustration. What version of the product are you using? On what operating system? OS X but this time in Microsoft Word which enables me to see the characters without the interference of Apple's core code error. ``` Original issue reported on code.google.com by `marri....@gmail.com` on 6 May 2015 at 3:39 Attachments: - [namak jhalak talak tak shak.jpg](https://storage.googleapis.com/google-code-attachments/noto/issue-361/comment-0/namak jhalak talak tak shak.jpg)
1.0
Noto Nastaliq ک final and isolated? shape - ``` What steps will reproduce the problem? The 'kaaf' which is used at this moment produces the miniature kaaf symbol together with the letter in the final and stand-alone positions. This occurs in some words while in others it doesn't! What is the expected output? What do you see instead? In Urdu it is never used, to the contrary, this shape is Arabic only. I suppose it is neither used in Persian. Please see the attached illustration. What version of the product are you using? On what operating system? OS X but this time in Microsoft Word which enables me to see the characters without the interference of Apple's core code error. ``` Original issue reported on code.google.com by `marri....@gmail.com` on 6 May 2015 at 3:39 Attachments: - [namak jhalak talak tak shak.jpg](https://storage.googleapis.com/google-code-attachments/noto/issue-361/comment-0/namak jhalak talak tak shak.jpg)
non_test
noto nastaliq ک final and isolated shape what steps will reproduce the problem the kaaf which is used at this moment produces the miniature kaaf symbol together with the letter in the final and stand alone positions this occurs in some words while in others it doesn t what is the expected output what do you see instead in urdu it is never used to the contrary this shape is arabic only i suppose it is neither used in persian please see the attached illustration what version of the product are you using on what operating system os x but this time in microsoft word which enables me to see the characters without the interference of apple s core code error original issue reported on code google com by marri gmail com on may at attachments jhalak talak tak shak jpg
0
213,689
16,532,610,895
IssuesEvent
2021-05-27 08:04:34
LeisyVasquez/EcoCol
https://api.github.com/repos/LeisyVasquez/EcoCol
closed
Prototipo en miro
documentation
Según la documentación solo quedo faltando el espacio para **reportar errores de la página**, se pondrá en el prototipo si en ultima se va a implementar. [https://miro.com/welcomeonboard/Zs1alv6K7tog1rKS8nBVAGiCorznevAsc4JHSbodoT797AmxSIABO73fLFsCMswk](Miro)
1.0
Prototipo en miro - Según la documentación solo quedo faltando el espacio para **reportar errores de la página**, se pondrá en el prototipo si en ultima se va a implementar. [https://miro.com/welcomeonboard/Zs1alv6K7tog1rKS8nBVAGiCorznevAsc4JHSbodoT797AmxSIABO73fLFsCMswk](Miro)
non_test
prototipo en miro según la documentación solo quedo faltando el espacio para reportar errores de la página se pondrá en el prototipo si en ultima se va a implementar miro
0
293,620
25,311,384,262
IssuesEvent
2022-11-17 17:43:34
wazuh/wazuh-qa
https://api.github.com/repos/wazuh/wazuh-qa
closed
Verify engine's API behavior
team/qa type/manual-testing role/qa-runtime-terror subteam/qa-rainbow target/5.0.0
| Target version | Related issue | Related PR/dev branch | |--------------------|--------------------|-----------------| | 5.0 | https://github.com/wazuh/wazuh-qa/issues/3533 | https://github.com/wazuh/wazuh/issues/11334 | <!-- Important: No section may be left blank. If not, delete it directly (in principle only "Configurations" and "Considerations" could be left blank in case of not proceeding). --> ## Description <!-- Description that puts into context and shows the QA tester the changes that have been implemented and have to be tested. --> Since the team is reworking the engine, we need to cover this new engine rework. This issue will test the new engine to ensure all is correct. The first two commands with their subcommand were tested in https://github.com/wazuh/wazuh-qa/issues/3475, now some of them like `env` are remaining ## Proposed test cases - [x] `API:` We want to test a correct input returns a valid result, an incorrect input returns an error and an unexpected input doesn't crash the program. The module to test are: - [x] Verify `test` command - [x] Verify `graph` command - [x] Verify `env` command ## Considerations
1.0
Verify engine's API behavior - | Target version | Related issue | Related PR/dev branch | |--------------------|--------------------|-----------------| | 5.0 | https://github.com/wazuh/wazuh-qa/issues/3533 | https://github.com/wazuh/wazuh/issues/11334 | <!-- Important: No section may be left blank. If not, delete it directly (in principle only "Configurations" and "Considerations" could be left blank in case of not proceeding). --> ## Description <!-- Description that puts into context and shows the QA tester the changes that have been implemented and have to be tested. --> Since the team is reworking the engine, we need to cover this new engine rework. This issue will test the new engine to ensure all is correct. The first two commands with their subcommand were tested in https://github.com/wazuh/wazuh-qa/issues/3475, now some of them like `env` are remaining ## Proposed test cases - [x] `API:` We want to test a correct input returns a valid result, an incorrect input returns an error and an unexpected input doesn't crash the program. The module to test are: - [x] Verify `test` command - [x] Verify `graph` command - [x] Verify `env` command ## Considerations
test
verify engine s api behavior target version related issue related pr dev branch description since the team is reworking the engine we need to cover this new engine rework this issue will test the new engine to ensure all is correct the first two commands with their subcommand were tested in now some of them like env are remaining proposed test cases api we want to test a correct input returns a valid result an incorrect input returns an error and an unexpected input doesn t crash the program the module to test are verify test command verify graph command verify env command considerations
1
117,247
25,079,276,995
IssuesEvent
2022-11-07 17:51:33
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
opened
insights: migrate backfill_completed_at
team/code-insights backend
Deprecate the old background routine and migrate backfill_completed_at to use the new stateful backfiller
1.0
insights: migrate backfill_completed_at - Deprecate the old background routine and migrate backfill_completed_at to use the new stateful backfiller
non_test
insights migrate backfill completed at deprecate the old background routine and migrate backfill completed at to use the new stateful backfiller
0
172,853
13,349,283,505
IssuesEvent
2020-08-29 23:42:59
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: sqlsmith/setup=tpcc/setting=default failed
C-test-failure O-roachtest O-robot branch-master release-blocker
[(roachtest).sqlsmith/setup=tpcc/setting=default failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2231078&tab=buildLog) on [master@8b91062f9351d18f9104aff567cb152df162021e](https://github.com/cockroachdb/cockroach/commits/8b91062f9351d18f9104aff567cb152df162021e): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpcc/setting=default/run_1 sqlsmith.go:172,sqlsmith.go:202,test_runner.go:754: error: pq: internal error: runtime error: invalid memory address or nil pointer dereference stmt: SELECT (-2998517247374703822):::INT8 AS col_314, (-794607064220096046):::INT8 AS col_315 FROM defaultdb.public.stock@[0] AS tab_147 JOIN defaultdb.public.order_line@order_line_stock_fk_idx AS tab_148 ON (tab_147.s_dist_03) = (tab_148.ol_dist_info) AND (tab_147.tableoid) = (tab_148.tableoid) AND (tab_147.s_i_id) = (tab_148.ol_quantity) LIMIT 41:::INT8; ``` <details><summary>More</summary><p> Artifacts: [/sqlsmith/setup=tpcc/setting=default](https://teamcity.cockroachdb.com/viewLog.html?buildId=2231078&tab=artifacts#/sqlsmith/setup=tpcc/setting=default) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpcc%2Fsetting%3Ddefault.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: sqlsmith/setup=tpcc/setting=default failed - [(roachtest).sqlsmith/setup=tpcc/setting=default failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2231078&tab=buildLog) on [master@8b91062f9351d18f9104aff567cb152df162021e](https://github.com/cockroachdb/cockroach/commits/8b91062f9351d18f9104aff567cb152df162021e): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpcc/setting=default/run_1 sqlsmith.go:172,sqlsmith.go:202,test_runner.go:754: error: pq: internal error: runtime error: invalid memory address or nil pointer dereference stmt: SELECT (-2998517247374703822):::INT8 AS col_314, (-794607064220096046):::INT8 AS col_315 FROM defaultdb.public.stock@[0] AS tab_147 JOIN defaultdb.public.order_line@order_line_stock_fk_idx AS tab_148 ON (tab_147.s_dist_03) = (tab_148.ol_dist_info) AND (tab_147.tableoid) = (tab_148.tableoid) AND (tab_147.s_i_id) = (tab_148.ol_quantity) LIMIT 41:::INT8; ``` <details><summary>More</summary><p> Artifacts: [/sqlsmith/setup=tpcc/setting=default](https://teamcity.cockroachdb.com/viewLog.html?buildId=2231078&tab=artifacts#/sqlsmith/setup=tpcc/setting=default) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpcc%2Fsetting%3Ddefault.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
test
roachtest sqlsmith setup tpcc setting default failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup tpcc setting default run sqlsmith go sqlsmith go test runner go error pq internal error runtime error invalid memory address or nil pointer dereference stmt select as col as col from defaultdb public stock as tab join defaultdb public order line order line stock fk idx as tab on tab s dist tab ol dist info and tab tableoid tab tableoid and tab s i id tab ol quantity limit more artifacts powered by
1
52,469
6,258,118,753
IssuesEvent
2017-07-14 14:52:25
openSUSE/umoci
https://api.github.com/repos/openSUSE/umoci
closed
tests: remove need for Docker
test/integration test/unit
Usage of Docker is a bit of an anti-pattern IMO, because it actually makes it harder to run the tests in environments where you don't have Docker (such as inside `rpmbuild`). Luckily the bats tests aren't *too* specific to Docker. * [ ] Make a nice way of getting the right version of our various testing dependencies. * [ ] Set up a pulling script for images that operates outside of Docker. * [ ] Clean up the weird mess of environment variable passing in the Makefile. * [ ] (optional) Make testing more modular so that you can run all tests that don't require internet (or something). The only downside to doing this is that the CI will no longer be running inside openSUSE so we can't make sure our packages are sane (the upside is that we no longer have to get packages into OBS in order to test with them on the CI). Also Docker is the main reason that #131 is necessary.
2.0
tests: remove need for Docker - Usage of Docker is a bit of an anti-pattern IMO, because it actually makes it harder to run the tests in environments where you don't have Docker (such as inside `rpmbuild`). Luckily the bats tests aren't *too* specific to Docker. * [ ] Make a nice way of getting the right version of our various testing dependencies. * [ ] Set up a pulling script for images that operates outside of Docker. * [ ] Clean up the weird mess of environment variable passing in the Makefile. * [ ] (optional) Make testing more modular so that you can run all tests that don't require internet (or something). The only downside to doing this is that the CI will no longer be running inside openSUSE so we can't make sure our packages are sane (the upside is that we no longer have to get packages into OBS in order to test with them on the CI). Also Docker is the main reason that #131 is necessary.
test
tests remove need for docker usage of docker is a bit of an anti pattern imo because it actually makes it harder to run the tests in environments where you don t have docker such as inside rpmbuild luckily the bats tests aren t too specific to docker make a nice way of getting the right version of our various testing dependencies set up a pulling script for images that operates outside of docker clean up the weird mess of environment variable passing in the makefile optional make testing more modular so that you can run all tests that don t require internet or something the only downside to doing this is that the ci will no longer be running inside opensuse so we can t make sure our packages are sane the upside is that we no longer have to get packages into obs in order to test with them on the ci also docker is the main reason that is necessary
1
189,904
14,527,517,813
IssuesEvent
2020-12-14 15:26:49
OpenLiberty/open-liberty
https://api.github.com/repos/OpenLiberty/open-liberty
closed
Test Failure: com.ibm.ws.ejbcontainer.session.passivation.tests.StatefulTimeoutTest.testXMLOnly_EE8_FEATURES
in:EJB Container team:Blizzard test bug
testXMLOnly_EE8_FEATURES:junit.framework.AssertionFailedError: 2020-11-30-19:48:51:577 The response did not contain "[SUCCESS]". Full output is:" ERROR: Caught exception attempting to call test method testXMLOnly on servlet com.ibm.ws.ejbcontainer.session.passivation.statefulTimeout.web.StatefulTimeoutServlet java.lang.AssertionError: Timeout was 15000 ms, but bean timed out after sleeping 10000 ms. at com.ibm.ws.ejbcontainer.session.passivation.statefulTimeout.web.StatefulTimeoutServlet.testXMLOnly(StatefulTimeoutServlet.java:208) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at componenttest.app.FATServlet.doGet(FATServlet.java:71) at javax.servlet.http.HttpServlet.service(HttpServlet.java:686) at javax.servlet.http.HttpServlet.service(HttpServlet.java:791) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1257) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:745) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:442) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer40.servlet.CacheServletWrapper40.handleRequest(CacheServletWrapper40.java:83) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:936) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1141) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:422) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:381) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:565) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:499) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:359) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:326) at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167) at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958) at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047) at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) ". expected:<0> but was:<1> at componenttest.topology.utils.HttpUtils.findStringInHttpConnection(HttpUtils.java:537) at componenttest.topology.utils.HttpUtils.findStringInHttpConnection(HttpUtils.java:495) at componenttest.topology.utils.HttpUtils.findStringInUrl(HttpUtils.java:472) at componenttest.topology.utils.HttpUtils.findStringInReadyUrl(HttpUtils.java:444) at componenttest.topology.utils.HttpUtils.findStringInReadyUrl(HttpUtils.java:414) at componenttest.topology.utils.FATServletClient.runTest(FATServletClient.java:83) at componenttest.custom.junit.runner.SyntheticServletTest.invokeExplosively(SyntheticServletTest.java:40) at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:197) at componenttest.rules.repeater.RepeatTests$CompositeRepeatTestActionStatement.evaluate(RepeatTests.java:115) at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:318) at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:171) Further analysis is available at https://libh-proxy1.fyre.ibm.com/cognitive/defectAnalysis.html?failureGroupId=failure_group-e7e85d5c-06d7-455f-acc9-34f139b446ed
1.0
Test Failure: com.ibm.ws.ejbcontainer.session.passivation.tests.StatefulTimeoutTest.testXMLOnly_EE8_FEATURES - testXMLOnly_EE8_FEATURES:junit.framework.AssertionFailedError: 2020-11-30-19:48:51:577 The response did not contain "[SUCCESS]". Full output is:" ERROR: Caught exception attempting to call test method testXMLOnly on servlet com.ibm.ws.ejbcontainer.session.passivation.statefulTimeout.web.StatefulTimeoutServlet java.lang.AssertionError: Timeout was 15000 ms, but bean timed out after sleeping 10000 ms. at com.ibm.ws.ejbcontainer.session.passivation.statefulTimeout.web.StatefulTimeoutServlet.testXMLOnly(StatefulTimeoutServlet.java:208) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at componenttest.app.FATServlet.doGet(FATServlet.java:71) at javax.servlet.http.HttpServlet.service(HttpServlet.java:686) at javax.servlet.http.HttpServlet.service(HttpServlet.java:791) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1257) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:745) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:442) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer40.servlet.CacheServletWrapper40.handleRequest(CacheServletWrapper40.java:83) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:936) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1141) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:422) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:381) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:565) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:499) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:359) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:326) at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167) at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958) at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047) at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) ". expected:<0> but was:<1> at componenttest.topology.utils.HttpUtils.findStringInHttpConnection(HttpUtils.java:537) at componenttest.topology.utils.HttpUtils.findStringInHttpConnection(HttpUtils.java:495) at componenttest.topology.utils.HttpUtils.findStringInUrl(HttpUtils.java:472) at componenttest.topology.utils.HttpUtils.findStringInReadyUrl(HttpUtils.java:444) at componenttest.topology.utils.HttpUtils.findStringInReadyUrl(HttpUtils.java:414) at componenttest.topology.utils.FATServletClient.runTest(FATServletClient.java:83) at componenttest.custom.junit.runner.SyntheticServletTest.invokeExplosively(SyntheticServletTest.java:40) at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:197) at componenttest.rules.repeater.RepeatTests$CompositeRepeatTestActionStatement.evaluate(RepeatTests.java:115) at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:318) at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:171) Further analysis is available at https://libh-proxy1.fyre.ibm.com/cognitive/defectAnalysis.html?failureGroupId=failure_group-e7e85d5c-06d7-455f-acc9-34f139b446ed
test
test failure com ibm ws ejbcontainer session passivation tests statefultimeouttest testxmlonly features testxmlonly features junit framework assertionfailederror the response did not contain full output is error caught exception attempting to call test method testxmlonly on servlet com ibm ws ejbcontainer session passivation statefultimeout web statefultimeoutservlet java lang assertionerror timeout was ms but bean timed out after sleeping ms at com ibm ws ejbcontainer session passivation statefultimeout web statefultimeoutservlet testxmlonly statefultimeoutservlet java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at componenttest app fatservlet doget fatservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at com ibm ws webcontainer servlet servletwrapper service servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer servlet cacheservletwrapper handlerequest cacheservletwrapper java at com ibm ws servlet handlerequest java at com ibm ws webcontainer webcontainer handlerequest webcontainer java at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java at com ibm ws http dispatcher internal channel httpdispatcherlink wraphandlerandexecute httpdispatcherlink java at com ibm ws http dispatcher internal 
channel httpdispatcherlink ready httpdispatcherlink java at com ibm ws http channel internal inbound httpinboundlink handlediscrimination httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink handlenewrequest httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink processrequest httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink ready httpinboundlink java at com ibm ws tcpchannel internal newconnectioninitialreadcallback sendtodiscriminators newconnectioninitialreadcallback java at com ibm ws tcpchannel internal newconnectioninitialreadcallback complete newconnectioninitialreadcallback java at com ibm ws tcpchannel internal workqueuemanager requestcomplete workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager attemptio workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager workerrun workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager worker run workqueuemanager java at com ibm ws threading internal executorserviceimpl runnablewrapper run executorserviceimpl java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java expected but was at componenttest topology utils httputils findstringinhttpconnection httputils java at componenttest topology utils httputils findstringinhttpconnection httputils java at componenttest topology utils httputils findstringinurl httputils java at componenttest topology utils httputils findstringinreadyurl httputils java at componenttest topology utils httputils findstringinreadyurl httputils java at componenttest topology utils fatservletclient runtest fatservletclient java at componenttest custom junit runner syntheticservlettest invokeexplosively syntheticservlettest java at componenttest custom junit runner fatrunner evaluate fatrunner 
java at componenttest rules repeater repeattests compositerepeattestactionstatement evaluate repeattests java at componenttest custom junit runner fatrunner evaluate fatrunner java at componenttest custom junit runner fatrunner run fatrunner java further analysis is available at
1
286,061
21,562,447,779
IssuesEvent
2022-05-01 11:16:24
rollerderby/scoreboard
https://api.github.com/repos/rollerderby/scoreboard
closed
Announce v5.0.0 and forward feedback on some issues reported on Facebook
documentation
**CRG 5.0.0 has been [released](https://github.com/rollerderby/scoreboard/releases/tag/v5.0.0).** This should be announced in the Facebook group. Also, when preparing the release, I went through the public Facebook group to see if any feedback on the beta was posted there and found a couple of posts where I think I can help the OPs. Since I can't post there myself, it'd be nice if these answers could be forwarded to the respective posts. #### Color Picker Default Color ([Post by Benjamin Doyle on March 17](https://www.facebook.com/groups/derbyscoreboard/posts/4890633761018357/)) This is included in 5.0.0. #### Operator Page not loading on Chrome v100 ([Post by Dave Almond on March 30](https://www.facebook.com/groups/derbyscoreboard/posts/4923083001106766/)) The files reported as not found have been removed going from 3.x to 4.0.0. Since it works using a different IP address, I'm assuming there is a file from 3.x stuck in the browser cache and this causes the issue. If so, clearing the browser cache should resolve it. #### Disappeared "Box" buttons ([Post by Yvonne Dietrich on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4927068467374886/)) This is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when SBO and LT try to start a box trip at about the same time, which is quite annoying to fix. There is an option in the settings to disable LT functionality, which will return the buttons. #### Missing Jam list ([Post by Ioan Wigmore on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4929261383822261/)) I managed to reproduce this by using "Start New Game" when the prior game had 0 jams. Reloading the screen consistently fixed the problem for me. Since 5.0.0 forces a reload whenever a new game is started, the problem should not occur there. 
#### Keybindings not disabled on other tabs ([Post by Carina Gerry on March 16](https://www.facebook.com/groups/derbyscoreboard/posts/4886098571471876/)) This bug is fixed in 5.0.0. (It was present since 4.1.1 - I'm surprised it wasn't noticed earlier.)
1.0
Announce v5.0.0 and forward feedback on some issues reported on Facebook - **CRG 5.0.0 has been [released](https://github.com/rollerderby/scoreboard/releases/tag/v5.0.0).** This should be announced in the Facebook group. Also, when preparing the release, I went through the public Facebook group to see if any feedback on the beta was posted there and found a couple of posts where I think I can help the OPs. Since I can't post there myself, it'd be nice if these answers could be forwarded to the respective posts. #### Color Picker Default Color ([Post by Benjamin Doyle on March 17](https://www.facebook.com/groups/derbyscoreboard/posts/4890633761018357/)) This is included in 5.0.0. #### Operator Page not loading on Chrome v100 ([Post by Dave Almond on March 30](https://www.facebook.com/groups/derbyscoreboard/posts/4923083001106766/)) The files reported as not found have been removed going from 3.x to 4.0.0. Since it works using a different IP address, I'm assuming there is a file from 3.x stuck in the browser cache and this causes the issue. If so, clearing the browser cache should resolve it. #### Disappeared "Box" buttons ([Post by Yvonne Dietrich on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4927068467374886/)) This is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when SBO and LT try to start a box trip at about the same time, which is quite annoying to fix. There is an option in the settings to disable LT functionality, which will return the buttons. #### Missing Jam list ([Post by Ioan Wigmore on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4929261383822261/)) I managed to reproduce this by using "Start New Game" when the prior game had 0 jams. Reloading the screen consistently fixed the problem for me. Since 5.0.0 forces a reload whenever a new game is started, the problem should not occur there. 
#### Keybindings not disabled on other tabs ([Post by Carina Gerry on March 16](https://www.facebook.com/groups/derbyscoreboard/posts/4886098571471876/)) This bug is fixed in 5.0.0. (It was present since 4.1.1 - I'm surprised it wasn't noticed earlier.)
non_test
announce and forward feedback on some issues reported on facebook crg has been this should be announced in the facebook group also when preparing the release i went through the public facebook group to see if any feedback on the beta was posted there and found a couple of posts where i think i can help the ops since i can t post there myself it d be nice if these answers could be forwarded to the respective posts color picker default color this is included in operator page not loading on chrome the files reported as not found have been removed going from x to since it works using a different ip address i m assuming there is a file from x stuck in the browser cache and this causes the issue if so clearing the browser cache should resolve it disappeared box buttons this is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when sbo and lt try to start a box trip at about the same time which is quite annoying to fix there is an option in the settings to disable lt functionality which will return the buttons missing jam list i managed to reproduce this by using start new game when the prior game had jams reloading the screen consistently fixed the problem for me since forces a reload whenever a new game is started the problem should not occur there keybindings not disabled on other tabs this bug is fixed in it was present since i m surprised it wasn t noticed earlier
0
344,459
30,747,517,240
IssuesEvent
2023-07-28 16:10:45
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/group2 - migration v2 fails with a descriptive message when a single document exceeds maxBatchSizeBytes
Team:Core failed-test
A test failed on a tracked branch ``` Error: Missing version for public endpoint GET /api/observability_onboarding/custom_logs/step/{name} at parseEndpoint (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/kbn-server-route-repository/src/parse_endpoint.ts:23:11) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/routes/register_routes.ts:39:47 at Array.forEach (<anonymous>) at forEach (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/routes/register_routes.ts:37:10) at ObservabilityOnboardingPlugin.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/plugin.ts:66:19) at PluginWrapper.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugin.ts:105:26) at PluginsSystem.setup [as setupPlugins] (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugins_system.ts:131:40) at PluginsService.setupPlugins [as setup] (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugins_service.ts:166:52) at Server.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/root/core-root-server-internal/src/server.ts:348:26) at Root.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/root/core-root-server-internal/src/root/index.ts:66:14) at Object.<anonymous> 
(/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/group2/batch_size_bytes.test.ts:119:5) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/30745#01884a02-35b1-4dfb-aab8-78245d4c47f5) <!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/group2","test.name":"migration v2 fails with a descriptive message when a single document exceeds maxBatchSizeBytes","test.failCount":12}} -->
1.0
Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/group2 - migration v2 fails with a descriptive message when a single document exceeds maxBatchSizeBytes - A test failed on a tracked branch ``` Error: Missing version for public endpoint GET /api/observability_onboarding/custom_logs/step/{name} at parseEndpoint (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/kbn-server-route-repository/src/parse_endpoint.ts:23:11) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/routes/register_routes.ts:39:47 at Array.forEach (<anonymous>) at forEach (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/routes/register_routes.ts:37:10) at ObservabilityOnboardingPlugin.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/x-pack/plugins/observability_onboarding/server/plugin.ts:66:19) at PluginWrapper.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugin.ts:105:26) at PluginsSystem.setup [as setupPlugins] (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugins_system.ts:131:40) at PluginsService.setupPlugins [as setup] (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/plugins/core-plugins-server-internal/src/plugins_service.ts:166:52) at Server.setup (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/root/core-root-server-internal/src/server.ts:348:26) at Root.setup 
(/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/packages/core/root/core-root-server-internal/src/root/index.ts:66:14) at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-6ad4d653405b6337/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/group2/batch_size_bytes.test.ts:119:5) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/30745#01884a02-35b1-4dfb-aab8-78245d4c47f5) <!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/group2","test.name":"migration v2 fails with a descriptive message when a single document exceeds maxBatchSizeBytes","test.failCount":12}} -->
test
failing test jest integration tests src core server integration tests saved objects migrations migration fails with a descriptive message when a single document exceeds maxbatchsizebytes a test failed on a tracked branch error missing version for public endpoint get api observability onboarding custom logs step name at parseendpoint var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn server route repository src parse endpoint ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins observability onboarding server routes register routes ts at array foreach at foreach var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins observability onboarding server routes register routes ts at observabilityonboardingplugin setup var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins observability onboarding server plugin ts at pluginwrapper setup var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core plugins core plugins server internal src plugin ts at pluginssystem setup var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core plugins core plugins server internal src plugins system ts at pluginsservice setupplugins var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core plugins core plugins server internal src plugins service ts at server setup var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core root core root server internal src server ts at root setup var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core root core root server internal src root index ts at object var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects migrations batch size bytes test ts first failure
1
145,841
11,710,120,744
IssuesEvent
2020-03-08 22:47:37
JuliaDocs/Documenter.jl
https://api.github.com/repos/JuliaDocs/Documenter.jl
closed
PDF/LaTeX test phase failing on tags
Type: Tests
https://travis-ci.org/JuliaDocs/Documenter.jl/builds/644089901 https://travis-ci.org/JuliaDocs/Documenter.jl/builds/638646797 ... https://travis-ci.org/JuliaDocs/Documenter.jl/builds/615449662 It blocks the documentation deployment phase, so that phase has to be restarted manually for tags as long as this is not fixed.
1.0
PDF/LaTeX test phase failing on tags - https://travis-ci.org/JuliaDocs/Documenter.jl/builds/644089901 https://travis-ci.org/JuliaDocs/Documenter.jl/builds/638646797 ... https://travis-ci.org/JuliaDocs/Documenter.jl/builds/615449662 It blocks the documentation deployment phase, so that phase has to be restarted manually for tags as long as this is not fixed.
test
pdf latex test phase failing on tags it blocks the documentation deployment phase so that phase has to be restarted manually for tags as long as this is not fixed
1
200,109
15,089,886,583
IssuesEvent
2021-02-06 08:13:30
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: ycsb/E/nodes=3 failed
C-test-failure O-roachtest O-robot branch-master release-blocker
[(roachtest).ycsb/E/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=buildLog) on [master@81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a](https://github.com/cockroachdb/cockroach/commits/81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_081315.346_n4_workload_run_ycsb Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2650257-1612594943-41-n4cpu8:4 -- ./workload run ycsb --init --insert-count=1000000 --workload=E --concurrency=96 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 4. Command with error: | | ``` | | ./workload run ycsb --init --insert-count=1000000 --workload=E --concurrency=96 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2687,ycsb.go:62,ycsb.go:79,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683 | main.registerYCSB.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:62 | main.registerYCSB.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:79 | 
main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2645 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/ycsb/E/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=artifacts#/ycsb/E/nodes=3) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aycsb%2FE%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: ycsb/E/nodes=3 failed - [(roachtest).ycsb/E/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=buildLog) on [master@81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a](https://github.com/cockroachdb/cockroach/commits/81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_081315.346_n4_workload_run_ycsb Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2650257-1612594943-41-n4cpu8:4 -- ./workload run ycsb --init --insert-count=1000000 --workload=E --concurrency=96 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 4. Command with error: | | ``` | | ./workload run ycsb --init --insert-count=1000000 --workload=E --concurrency=96 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2687,ycsb.go:62,ycsb.go:79,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683 | main.registerYCSB.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:62 | main.registerYCSB.func2 | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:79 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2645 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/ycsb/E/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=artifacts#/ycsb/E/nodes=3) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aycsb%2FE%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
test
roachtest ycsb e nodes failed on runtime goexit usr local go src runtime asm s wraps output in run workload run ycsb wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run ycsb init insert count workload e concurrency splits histograms perf stats json select for update true ramp duration pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload run ycsb init insert count workload e concurrency splits histograms perf stats json select for update true ramp duration pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go ycsb go ycsb go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerycsb home agent work go src github com cockroachdb cockroach pkg cmd roachtest ycsb go main registerycsb home agent work go src github com cockroachdb cockroach pkg cmd roachtest ycsb go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types 
withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts powered by
1
148,780
11,864,858,035
IssuesEvent
2020-03-25 22:40:11
longhorn/longhorn
https://api.github.com/repos/longhorn/longhorn
opened
[BUG] Nightly Upgrade Test: test_restore_inc sometimes failed
area/test bug
**Describe the bug** **To Reproduce** Steps to reproduce the behavior: 1. Install Longhorn v0.7.0 2. Upgrade Longhorn to `master` 3. Run `test_restore_inc` test **Expected behavior** test should pass **Log** ``` clients = {'longhorn-tests-01': <longhorn.Client object at 0x7fcf376a6fd0>, 'longhorn-tests-02': <longhorn.Client object at 0x7fcf37df1c10>, 'longhorn-tests-03': <longhorn.Client object at 0x7fcf37624350>} core_api = <kubernetes.client.apis.core_v1_api.CoreV1Api object at 0x7fcf376ec190> volume_name = 'longhorn-testvol-1xosau' pod = {'apiVersion': 'v1', 'kind': 'Pod', 'metadata': {'name': 'pod-sb-2-longhorn-testvol-1xosau'}, 'spec': {'containers': [...ep', ...}], 'volumes': [{'name': 'pod-data', 'persistentVolumeClaim': {'claimName': 'sb-2-longhorn-testvol-1xosau'}}]}} @pytest.mark.coretest # NOQA def test_restore_inc(clients, core_api, volume_name, pod): # NOQA for _, client in iter(clients.items()): break setting = client.by_id_setting(common.SETTING_BACKUP_TARGET) # test backupTarget for multiple settings backupstores = common.get_backupstore_url() for backupstore in backupstores: if common.is_backupTarget_s3(backupstore): backupsettings = backupstore.split("$") setting = client.update(setting, value=backupsettings[0]) assert setting.value == backupsettings[0] credential = client.by_id_setting( common.SETTING_BACKUP_TARGET_CREDENTIAL_SECRET) credential = client.update(credential, value=backupsettings[1]) assert credential.value == backupsettings[1] else: setting = client.update(setting, value=backupstore) assert setting.value == backupstore credential = client.by_id_setting( common.SETTING_BACKUP_TARGET_CREDENTIAL_SECRET) credential = client.update(credential, value="") assert credential.value == "" > restore_inc_test(client, core_api, volume_name, pod) test_basic.py:595: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_basic.py:731: in restore_inc_test _, backup2, _, data2 = create_backup(client, volume_name, data2) 
common.py:222: in create_backup bv, b = find_backup(client, volname, snap.name) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ client = <longhorn.Client object at 0x7fcf376a6fd0> vol_name = 'longhorn-testvol-1xosau' snap_name = 'b0f5359c-58e3-4d3b-88bf-67d3ed1952d0' def find_backup(client, vol_name, snap_name): found = False for i in range(100): bvs = client.list_backupVolume() for bv in bvs: if bv.name == vol_name: found = True break if found: break time.sleep(1) assert found found = False for i in range(20): backups = bv.backupList().data for b in backups: if b.snapshotName == snap_name: found = True break if found: break time.sleep(1) > assert found E AssertionError common.py:2169: AssertionError ``` **Environment:** - Longhorn version: 0.8.0 - Kubernetes version: v1.17.2 - Node OS type and version: Ubuntu 18.04 **Additional context:** longhorn-upgrade-tests/18
1.0
[BUG] Nightly Upgrade Test: test_restore_inc sometimes failed - **Describe the bug** **To Reproduce** Steps to reproduce the behavior: 1. Install Longhorn v0.7.0 2. Upgrade Longhorn to `master` 3. Run `test_restore_inc` test **Expected behavior** test should pass **Log** ``` clients = {'longhorn-tests-01': <longhorn.Client object at 0x7fcf376a6fd0>, 'longhorn-tests-02': <longhorn.Client object at 0x7fcf37df1c10>, 'longhorn-tests-03': <longhorn.Client object at 0x7fcf37624350>} core_api = <kubernetes.client.apis.core_v1_api.CoreV1Api object at 0x7fcf376ec190> volume_name = 'longhorn-testvol-1xosau' pod = {'apiVersion': 'v1', 'kind': 'Pod', 'metadata': {'name': 'pod-sb-2-longhorn-testvol-1xosau'}, 'spec': {'containers': [...ep', ...}], 'volumes': [{'name': 'pod-data', 'persistentVolumeClaim': {'claimName': 'sb-2-longhorn-testvol-1xosau'}}]}} @pytest.mark.coretest # NOQA def test_restore_inc(clients, core_api, volume_name, pod): # NOQA for _, client in iter(clients.items()): break setting = client.by_id_setting(common.SETTING_BACKUP_TARGET) # test backupTarget for multiple settings backupstores = common.get_backupstore_url() for backupstore in backupstores: if common.is_backupTarget_s3(backupstore): backupsettings = backupstore.split("$") setting = client.update(setting, value=backupsettings[0]) assert setting.value == backupsettings[0] credential = client.by_id_setting( common.SETTING_BACKUP_TARGET_CREDENTIAL_SECRET) credential = client.update(credential, value=backupsettings[1]) assert credential.value == backupsettings[1] else: setting = client.update(setting, value=backupstore) assert setting.value == backupstore credential = client.by_id_setting( common.SETTING_BACKUP_TARGET_CREDENTIAL_SECRET) credential = client.update(credential, value="") assert credential.value == "" > restore_inc_test(client, core_api, volume_name, pod) test_basic.py:595: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_basic.py:731: in restore_inc_test _, 
backup2, _, data2 = create_backup(client, volume_name, data2) common.py:222: in create_backup bv, b = find_backup(client, volname, snap.name) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ client = <longhorn.Client object at 0x7fcf376a6fd0> vol_name = 'longhorn-testvol-1xosau' snap_name = 'b0f5359c-58e3-4d3b-88bf-67d3ed1952d0' def find_backup(client, vol_name, snap_name): found = False for i in range(100): bvs = client.list_backupVolume() for bv in bvs: if bv.name == vol_name: found = True break if found: break time.sleep(1) assert found found = False for i in range(20): backups = bv.backupList().data for b in backups: if b.snapshotName == snap_name: found = True break if found: break time.sleep(1) > assert found E AssertionError common.py:2169: AssertionError ``` **Environment:** - Longhorn version: 0.8.0 - Kubernetes version: v1.17.2 - Node OS type and version: Ubuntu 18.04 **Additional context:** longhorn-upgrade-tests/18
test
nightly upgrade test test restore inc sometimes failed describe the bug to reproduce steps to reproduce the behavior install longhorn upgrade longhorn to master run test restore inc test expected behavior test should pass log clients longhorn tests longhorn tests longhorn tests core api volume name longhorn testvol pod apiversion kind pod metadata name pod sb longhorn testvol spec containers volumes pytest mark coretest noqa def test restore inc clients core api volume name pod noqa for client in iter clients items break setting client by id setting common setting backup target test backuptarget for multiple settings backupstores common get backupstore url for backupstore in backupstores if common is backuptarget backupstore backupsettings backupstore split setting client update setting value backupsettings assert setting value backupsettings credential client by id setting common setting backup target credential secret credential client update credential value backupsettings assert credential value backupsettings else setting client update setting value backupstore assert setting value backupstore credential client by id setting common setting backup target credential secret credential client update credential value assert credential value restore inc test client core api volume name pod test basic py test basic py in restore inc test create backup client volume name common py in create backup bv b find backup client volname snap name client vol name longhorn testvol snap name def find backup client vol name snap name found false for i in range bvs client list backupvolume for bv in bvs if bv name vol name found true break if found break time sleep assert found found false for i in range backups bv backuplist data for b in backups if b snapshotname snap name found true break if found break time sleep assert found e assertionerror common py assertionerror environment longhorn version kubernetes version node os type and version ubuntu additional context longhorn 
upgrade tests
1
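The `find_backup` helper in the trace above polls in a fixed-interval loop with a hard retry cap before asserting. A minimal, generic version of that wait-until pattern (hypothetical helper names, not part of the Longhorn test suite) might look like:

```python
import time

def wait_until(predicate, retries=20, interval=1.0):
    """Poll predicate() up to `retries` times, sleeping `interval`
    seconds between attempts; return True as soon as it holds."""
    for _ in range(retries):
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: a condition that only becomes true on the third poll,
# mimicking a backup that takes a moment to appear in the listing.
state = {"calls": 0}

def backup_visible():
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_until(backup_visible, retries=5, interval=0)
```

Returning a boolean instead of asserting inside the helper lets the caller attach a more specific failure message, which makes flaky-timeout failures like the one above easier to diagnose.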
2,813
10,063,518,400
IssuesEvent
2019-07-23 06:09:48
diofant/diofant
https://api.github.com/repos/diofant/diofant
opened
Throw away some modules
maintainability needs decision
Probably, some less important stuff (diffgeom, geometry, vector, stats, calculus) should be maintained separately, as independent packages, which will require diofant.
True
Throw away some modules - Probably, some less important stuff (diffgeom, geometry, vector, stats, calculus) should be maintained separately, as independent packages, which will require diofant.
non_test
throw away some modules probably some less important stuff diffgeom geometry vector stats calculus should be maintained separately as independent packages which will require diofant
0
391,511
26,895,917,539
IssuesEvent
2023-02-06 12:24:50
simonsimon006/kth_a2_continuous_integration
https://api.github.com/repos/simonsimon006/kth_a2_continuous_integration
opened
The API documentation should be generated in a browsable format
documentation P5
The API documentation should be generated in a browsable format (e.g. HTML when using Javadoc).
1.0
The API documentation should be generated in a browsable format - The API documentation should be generated in a browsable format (e.g. HTML when using Javadoc).
non_test
the api documentation should be generated in a browsable format the api documentation should be generated in a browsable format e g html when using javadoc
0
18,637
3,393,153,057
IssuesEvent
2015-11-30 22:45:26
stamen/caliparks.org
https://api.github.com/repos/stamen/caliparks.org
opened
Footer finesse
Design
The scroll arrow is missing, and there is a white stripe between the light and dark gray bits. ![screen shot 2015-11-30 at 2 37 54 pm](https://cloud.githubusercontent.com/assets/12550451/11486769/0cff0758-9770-11e5-8b37-5becaf86ce32.png) Also, I assume the sponsor logos should link out to the following locations: stamen.com www.greeninfo.org www.parksforward.com www.resourceslegacyfund.org
1.0
Footer finesse - The scroll arrow is missing, and there is a white stripe between the light and dark gray bits. ![screen shot 2015-11-30 at 2 37 54 pm](https://cloud.githubusercontent.com/assets/12550451/11486769/0cff0758-9770-11e5-8b37-5becaf86ce32.png) Also, I assume the sponsor logos should link out to the following locations: stamen.com www.greeninfo.org www.parksforward.com www.resourceslegacyfund.org
non_test
footer finesse the scroll arrow is missing and there is a white stripe between the light and dark gray bits also i assume the sponsor logos should link out to the following locations stamen com
0
33,142
7,035,541,366
IssuesEvent
2017-12-28 00:56:18
OGMS/ogms
https://api.github.com/repos/OGMS/ogms
closed
new term - 'medication'
auto-migrated Priority-Medium Type-Defect
``` Adrien Barton (at INSERM) has suggested that OGMS would be a good place for the term 'medication' (also, 'medicine'). Do we agree it belongs here? Here's a (very weak) first stab at a definition: "A obi:processed material whose function is to alleviate the signs and symptoms of a disease or negatively regulate the effects of a disorder and designed to be realized during part of a treatment process" Please help improve. In particular, we need to refine the genus somehow. ``` Original issue reported on code.google.com by `albertgo...@gmail.com` on 21 Sep 2011 at 4:34
1.0
new term - 'medication' - ``` Adrien Barton (at INSERM) has suggested that OGMS would be a good place for the term 'medication' (also, 'medicine'). Do we agree it belongs here? Here's a (very weak) first stab at a definition: "A obi:processed material whose function is to alleviate the signs and symptoms of a disease or negatively regulate the effects of a disorder and designed to be realized during part of a treatment process" Please help improve. In particular, we need to refine the genus somehow. ``` Original issue reported on code.google.com by `albertgo...@gmail.com` on 21 Sep 2011 at 4:34
non_test
new term medication adrien barton at inserm has suggested that ogms would be a good place for the term medication also medicine do we agree it belongs here here s a very weak first stab at a definition a obi processed material whose function is to alleviate the signs and symptoms of a disease or negatively regulate the effects of a disorder and designed to be realized during part of a treatment process please help improve in particular we need to refine the genus somehow original issue reported on code google com by albertgo gmail com on sep at
0
13,891
3,368,101,061
IssuesEvent
2015-11-22 18:38:18
d3athrow/vgstation13
https://api.github.com/repos/d3athrow/vgstation13
closed
(WEB REPORT BY: lorkhatosh REMOTE: 198.245.63.50:7777) Cryokinesis doesn't work
Bug Feature Loss Needs Moar Testing
Cryokinesis simply does not work: when you click the cryokinesis mutation button, it does nothing.
1.0
(WEB REPORT BY: lorkhatosh REMOTE: 198.245.63.50:7777) Cryokinesis doesn't work - Cryokinesis simply does not work: when you click the cryokinesis mutation button, it does nothing.
test
web report by lorkhatosh remote cryokinesis doesn t work cryokinesis simply does not work when you click cryokinesis mutation button it does nothing
1
183,711
31,723,067,138
IssuesEvent
2023-09-10 16:29:55
chromium/subspace
https://api.github.com/repos/chromium/subspace
opened
Iterator nth must be optimized for contiguous iterators
design
Slice iterators need to be able to perform nth() in O(1) time, which means Iterator needs to be able to delegate from the default impl to a specific one. There's a few other important methods like that as well. Then aggressively use those methods from the default methods if there's any opportunities to do so that aren't done yet (mostly they should already be). fold is also optimized for some iterators in [std.rs](https://std.rs) I believe.
1.0
Iterator nth must be optimized for contiguous iterators - Slice iterators need to be able to perform nth() in O(1) time, which means Iterator needs to be able to delegate from the default impl to a specific one. There's a few other important methods like that as well. Then aggressively use those methods from the default methods if there's any opportunities to do so that aren't done yet (mostly they should already be). fold is also optimized for some iterators in [std.rs](https://std.rs) I believe.
non_test
iterator nth must be optimized for contiguous iterators slice iterators need to be able to perform nth in o time which means iterator needs to be able to delegate from the default impl to a specific one there s a few other important methods like that as well then aggressively use those methods from the default methods if there s any opportunities to do so that aren t done yet mostly they should already be fold is also optimized for some iterators in i believe
0
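The delegation the record above describes — a default element-by-element `nth` versus a specialized O(1) version for contiguous storage — can be sketched in Python purely as an illustration of the two cost models; the function names are hypothetical and not part of the Subspace library:

```python
from itertools import islice

def nth_generic(iterable, n):
    """Default-style nth: advance element by element -- O(n) steps."""
    return next(islice(iter(iterable), n, None), None)

def nth_contiguous(seq, n):
    """Specialized nth for contiguous storage: direct index -- O(1)."""
    return seq[n] if 0 <= n < len(seq) else None

data = list(range(1000))
assert nth_generic(data, 42) == nth_contiguous(data, 42) == 42
assert nth_contiguous(data, 5000) is None
```

The point of the delegation is that the default implementation stays correct for any iterator, while containers that know their layout can override it without changing the caller's code.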
335,527
30,038,171,596
IssuesEvent
2023-06-27 13:59:35
fljdin/dispatch
https://api.github.com/repos/fljdin/dispatch
closed
Indentation with Heredoc in testing
testing
For both unit and functional testing, it could be more readable to use proper dedent processing * https://github.com/lithammer/dedent * inspired by https://github.com/dalibo/ldap2pg ```go c := configFromYAML(` sync_map: - roles: - name: "toto" `) ``` * bash builtin `<<-EOF` ```sh # t/tests.bats create_commands() { cat <<-EOF > commands.sh echo 1 echo 2 EOF } ```
1.0
Indentation with Heredoc in testing - For both unit and functional testing, it could be more readable to use proper dedent processing * https://github.com/lithammer/dedent * inspired by https://github.com/dalibo/ldap2pg ```go c := configFromYAML(` sync_map: - roles: - name: "toto" `) ``` * bash builtin `<<-EOF` ```sh # t/tests.bats create_commands() { cat <<-EOF > commands.sh echo 1 echo 2 EOF } ```
test
indentation with heredoc in testing for both unit and functional testing could be more readable to use proper dedent processing inspired from go c configfromyaml sync map roles name toto bash builtin oef sh t tests bats create commands cat commands sh echo echo eof
1
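The dedent idea in the record above — stripping the common leading whitespace so inline YAML or heredoc fixtures stay readable inside indented test code — is exactly what Python's standard `textwrap.dedent` does; this is shown only as an analogue of the Go `lithammer/dedent` package and the bash `<<-EOF` builtin mentioned there:

```python
from textwrap import dedent

# Indented inline fixture, as it would appear inside a test function.
config = dedent("""\
    sync_map:
      - roles:
          - name: "toto"
    """)

# The common leading indentation is stripped, so the embedded YAML
# starts at column zero and parses the same as a standalone file.
assert config.startswith("sync_map:")
assert '- name: "toto"' in config
```

Relative indentation between the lines is preserved, so nesting inside the fixture survives; only the shared margin is removed.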
650,006
21,332,117,258
IssuesEvent
2022-04-18 09:47:59
opencrvs/opencrvs-core
https://api.github.com/repos/opencrvs/opencrvs-core
opened
Alignment of Side menu is not vertically straight in Mobile view
👹Bug Priority: low
**Description** In mobile view, Side menu (Performance,Team,Configuration,settings,Logout) aren't aligned in accordance with the 1st line. **To Reproduce** 1. Go to https://register.farajaland-qa.opencrvs.org/ 2. Login as National Sys Admin 3. Observe Alignment of Performance,team settings and logout 4. See error **Actual Result** Side menus aren't aligned as design **Expected behaviour** Alignment of (performance,team,setting, logout) should be vertically straight **Screenshots** ![image.png](https://images.zenhubusercontent.com/619f3ae43a0c92533cd6c6db/e365195d-3db0-4c1f-b4e3-f3141eb03cdb) **Smartphone :** - Device: Samsung S+ - OS: Android - Browser: Chrome
1.0
Alignment of Side menu is not vertically straight in Mobile view - **Description** In mobile view, Side menu (Performance,Team,Configuration,settings,Logout) aren't aligned in accordance with the 1st line. **To Reproduce** 1. Go to https://register.farajaland-qa.opencrvs.org/ 2. Login as National Sys Admin 3. Observe Alignment of Performance,team settings and logout 4. See error **Actual Result** Side menus aren't aligned as design **Expected behaviour** Alignment of (performance,team,setting, logout) should be vertically straight **Screenshots** ![image.png](https://images.zenhubusercontent.com/619f3ae43a0c92533cd6c6db/e365195d-3db0-4c1f-b4e3-f3141eb03cdb) **Smartphone :** - Device: Samsung S+ - OS: Android - Browser: Chrome
non_test
alignment of side menu is not vertically straight in mobile view description in mobile view side menu performance team configuration settings logout aren t aligned in accordance with the line to reproduce go to login as national sys admin observe alignment of performance team settings and logout see error actual result side menus aren t aligned as design expected behaviour alignment of performance team setting logout should be vertically straight screenshots smartphone device samsung s os android browser chrome
0
60,773
25,256,326,613
IssuesEvent
2022-11-15 18:24:17
dotnet/arcade
https://api.github.com/repos/dotnet/arcade
closed
Create a single source of truth for our Platform Matrix
Epic area-eng-services
Migrated from https://github.com/dotnet/core-eng/issues/11077 @Chrisboh wrote: ## Motivation We continue to have lack of clarity within the division around what operating systems (OSes) we support for .NET, including details about how the environment look. The data is available but is located in multiple places (such as the official documentation ([Windows](https://docs.microsoft.com/en-us/dotnet/core/install/windows?tabs=net60#supported-releases), [macOS](https://docs.microsoft.com/en-us/dotnet/core/install/macos#supported-releases) and [Linux](https://docs.microsoft.com/en-us/dotnet/core/install/linux) platforms), OSOB YAML definitions, Kusto DB, AzDO pipelines) which makes it not suitable to be consumed by automation. We need to collect this information in a single place so it's available both to us and to our customers in forms that are readable both by humans (csvs, PowerBI reports) and by automation (e.g., JSON). Additionally, there is not process in place that allows individual developers to see what OS versions are currently being considered for support and/or request support for a specific scenario they may have. Being able to provide clarity to the division and management on what OSes we support, in which environments and how we go about adding/removing support is the end goal for this epic. The resulting processes, automation and produced data will be called "Matrix of Truth", abbreviated as "MoT" in the following description. ## Business Objectives The MoT initiative has two complementary aspects - process and tooling. Below are the business objectives for each of those aspects. ### Process Objectives The process and communication of changes to the fundamental underlying data of the MoT, such as what concrete versions of OSes are supported for a given version of .NET. The initial decisions about which OSes will be supported are made by the management and the PMs. 
Our responsibility is to define the mechanics of conveying, maintaining and presenting this information and their changes so that full audit trail (i.e., who changed what, when and why) is captured. - [x] Identify v-team membership - dnceng and partner teams - [x] Establish regular (monthly) sync for v-team - [ ] Document and share process for operating systems lifecycle for each type of OS (Linux, Windows, Mac) supported ### Implementation Objectives To get the full MoT picture, we will need to aggregate data from various sources in an automated way on a regular basis. - [x] Use-case scenarios for MoT are defined and reviewed by our partners - [x] System that aggregates data necessary to perform the identified scenarios in an automated manner is implemented - [x] Our customers can use the MoT data in automation (via JSON reports, etc.) - [x] Our customers can view and explore the MoT data in a human-readable form (xls, PowerBI report, etc.) - [x] The system ensures that full audit trail (who changed what, when and why) of the underlying fundamental data is kept ### Scenarios Together with our partners (discussed on Engineering Services Backlog Review on 03-Feb-2022), we have identified the following scenarios that will be performed by MoT users regularly. Each of the following scenarios represents a business objective to fulfill. #### Roles * Platform Managers - Individuals responsible for managing the lifecycle of an Operating System for .NET (PM team, Leadership, DNCEng) * Customers - Anyone who builds/tests .NET #### Initial OS Evaluation and Requests Scenario | Process Implemented ---|:-: **Scenario 1** - As a customer or platform manager, I want to know what new Operating Systems are currently being evaluated and what the status of the approval process. | ✅ **Scenario 2** - As a customer or platform manager, I want to be able to request support for a new Operating System. 
| ✅ **Scenario 3** - As a platform manager, I want to be able to estimate infrastructure costs of onboarding a new Operating System based on utilization data of similar OSes. | ✅ **Scenario 4** - As a platform manager, I want to be able to see the overhead cost for supporting a new Operating System. | 🔲 **Scenario 5** - As a platform manager, I want to know the estimated release dates of upcoming new OS versions so that I can plan my future work. | ✅ #### OS Onboarding and Adoption Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 6** - As a platform manager, I want to track the progress of getting support for a new operating system and its adoption rate by the product teams. | ✅ | ✅ | ✅| #### Querying Current State of Our Environment Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 7** - As a customer or platform manager, I want to be able to see all the Operating Systems that are officially supported for version of .NET, so I can make sure that we have the correct coverage. | ✅ | ✅ | ✅ | **Scenario 8** - As a customer, I want to see how each Operating System is supported in our environment (type of docker container / VM / physical machine, Operation System version, Build tools VS/XCODE version, Python version, etc.).​ | ✅ | ✅ | ✅ | **Scenario 9** - As a customer or platform manager, I want to see what pipelines and/or repos are targeting each of the supported Operating Systems or Helix queues. | ✅ | ✅ | ✅ | **Scenario 10** - As a platform manager, I want to see estimated utilization and time we spend for a given Operating System. | 🔲 | 🔲 | 🔲 | Based on discussion with @Chrisboh and @ilyas1974 we decided to postpone this scenario **Scenario 11** - As a platform manager, I want to see which product branches are targeting each of the supported Operating Systems. 
| ✅ | ✅ | ✅ | **Scenario 12** - As a customer or platform manager, I want to see all Operating Systems which are approaching their EOL, so I can make sure my workloads are moved before an Operation System goes away. | ✅ | ✅ | ✅ | **Scenario 13** - As a customer, I want to know all machine specifications which we run in our environment (physical machine, VM). | ✅ | ✅ | ✅ | #### OS Retirement and Removal Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 14** - As a platform manager, I want to track the progress of removing support for an OS within our infrastructure. | ✅ | ✅| ✅ | **Scenario 15** - As a platform manager I want to see any differences between the official EOL date from the OEM and what our end of support date is | ✅ | ✅ | ✅ | ### Architecture ![image](https://user-images.githubusercontent.com/34948975/153942049-d0f62653-737a-4f17-a58b-e3f594060037.png) ### Milestones - [x] Identify business objectives in form of user scenarios and get feedback on them from our partners - [x] Design the internal data model that captures relationships of the fundamental entities (.NET versions, OSes, RIDs, Helix queues, 1ES, etc., docker) (30-January) - [x] https://github.com/dotnet/core-eng/issues/15677 (18-February) - [x] https://github.com/dotnet/core-eng/issues/15676 (25-February) - [x] Scenario 7, 8, 9 report review for March Engineering Services Backlog review (3-March) - [x] Implement report generation pipeline - [x] Collect usage telemetry (scenarios 3, 4, 10) - [x] Collect information about machine specification (scenario 13) - [x] Data collection, reports generation and process implementation for (TBD) - [x] the rest of "Querying Current State of Our Environment" scenarios (end of March) - [x] the "Initial OS Evaluation and Requests" scenarios (TBD) - [x] the "OS Onboarding and Adoption" scenario (TBD) - [x] https://github.com/dotnet/core-eng/issues/15668 # Task list ## Process related issues - [x] 
https://github.com/dotnet/core-eng/issues/7298 - [x] https://github.com/dotnet/core-eng/issues/10684 - [x] https://github.com/dotnet/core-eng/issues/13918 ## Proof of concept - [x] https://github.com/dotnet/core-eng/issues/15445 - [x] https://github.com/dotnet/core-eng/issues/15357 - [x] https://github.com/dotnet/core-eng/issues/15525 - [x] https://github.com/dotnet/core-eng/issues/15552 - [x] https://github.com/dotnet/core-eng/issues/15621 - [x] https://github.com/dotnet/core-eng/issues/15673 ## Refactoring and other work - [x] https://github.com/dotnet/core-eng/issues/15082 - [x] https://github.com/dotnet/core-eng/issues/15672 - [x] https://github.com/dotnet/core-eng/issues/15607 - [x] https://github.com/dotnet/core-eng/issues/15622 <!-- Do not remove this section. Triage will add new issues identified as part of this epic into this section. The v-team is responsible for triaging them into their business goals. If they do not fit into a business goals, please remove from this section. --> # Recently Triaged Issues - [x] https://github.com/dotnet/arcade/issues/8979 - [x] https://github.com/dotnet/arcade/issues/8976
1.0
Create a single source of truth for our Platform Matrix - Migrated from https://github.com/dotnet/core-eng/issues/11077 @Chrisboh wrote: ## Motivation We continue to have lack of clarity within the division around what operating systems (OSes) we support for .NET, including details about how the environment look. The data is available but is located in multiple places (such as the official documentation ([Windows](https://docs.microsoft.com/en-us/dotnet/core/install/windows?tabs=net60#supported-releases), [macOS](https://docs.microsoft.com/en-us/dotnet/core/install/macos#supported-releases) and [Linux](https://docs.microsoft.com/en-us/dotnet/core/install/linux) platforms), OSOB YAML definitions, Kusto DB, AzDO pipelines) which makes it not suitable to be consumed by automation. We need to collect this information in a single place so it's available both to us and to our customers in forms that are readable both by humans (csvs, PowerBI reports) and by automation (e.g., JSON). Additionally, there is not process in place that allows individual developers to see what OS versions are currently being considered for support and/or request support for a specific scenario they may have. Being able to provide clarity to the division and management on what OSes we support, in which environments and how we go about adding/removing support is the end goal for this epic. The resulting processes, automation and produced data will be called "Matrix of Truth", abbreviated as "MoT" in the following description. ## Business Objectives The MoT initiative has two complementary aspects - process and tooling. Below are the business objectives for each of those aspects. ### Process Objectives The process and communication of changes to the fundamental underlying data of the MoT, such as what concrete versions of OSes are supported for a given version of .NET. The initial decisions about which OSes will be supported are made by the management and the PMs. 
Our responsibility is to define the mechanics of conveying, maintaining and presenting this information and their changes so that full audit trail (i.e., who changed what, when and why) is captured. - [x] Identify v-team membership - dnceng and partner teams - [x] Establish regular (monthly) sync for v-team - [ ] Document and share process for operating systems lifecycle for each type of OS (Linux, Windows, Mac) supported ### Implementation Objectives To get the full MoT picture, we will need to aggregate data from various sources in an automated way on a regular basis. - [x] Use-case scenarios for MoT are defined and reviewed by our partners - [x] System that aggregates data necessary to perform the identified scenarios in an automated manner is implemented - [x] Our customers can use the MoT data in automation (via JSON reports, etc.) - [x] Our customers can view and explore the MoT data in a human-readable form (xls, PowerBI report, etc.) - [x] The system ensures that full audit trail (who changed what, when and why) of the underlying fundamental data is kept ### Scenarios Together with our partners (discussed on Engineering Services Backlog Review on 03-Feb-2022), we have identified the following scenarios that will be performed by MoT users regularly. Each of the following scenarios represents a business objective to fulfill. #### Roles * Platform Managers - Individuals responsible for managing the lifecycle of an Operating System for .NET (PM team, Leadership, DNCEng) * Customers - Anyone who builds/tests .NET #### Initial OS Evaluation and Requests Scenario | Process Implemented ---|:-: **Scenario 1** - As a customer or platform manager, I want to know what new Operating Systems are currently being evaluated and what the status of the approval process. | ✅ **Scenario 2** - As a customer or platform manager, I want to be able to request support for a new Operating System. 
| ✅ **Scenario 3** - As a platform manager, I want to be able to estimate infrastructure costs of onboarding a new Operating System based on utilization data of similar OSes. | ✅ **Scenario 4** - As a platform manager, I want to be able to see the overhead cost for supporting a new Operating System. | 🔲 **Scenario 5** - As a platform manager, I want to know the estimated release dates of upcoming new OS versions so that I can plan my future work. | ✅ #### OS Onboarding and Adoption Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 6** - As a platform manager, I want to track the progress of getting support for a new operating system and its adoption rate by the product teams. | ✅ | ✅ | ✅| #### Querying Current State of Our Environment Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 7** - As a customer or platform manager, I want to be able to see all the Operating Systems that are officially supported for version of .NET, so I can make sure that we have the correct coverage. | ✅ | ✅ | ✅ | **Scenario 8** - As a customer, I want to see how each Operating System is supported in our environment (type of docker container / VM / physical machine, Operation System version, Build tools VS/XCODE version, Python version, etc.).​ | ✅ | ✅ | ✅ | **Scenario 9** - As a customer or platform manager, I want to see what pipelines and/or repos are targeting each of the supported Operating Systems or Helix queues. | ✅ | ✅ | ✅ | **Scenario 10** - As a platform manager, I want to see estimated utilization and time we spend for a given Operating System. | 🔲 | 🔲 | 🔲 | Based on discussion with @Chrisboh and @ilyas1974 we decided to postpone this scenario **Scenario 11** - As a platform manager, I want to see which product branches are targeting each of the supported Operating Systems. 
| ✅ | ✅ | ✅ | **Scenario 12** - As a customer or platform manager, I want to see all Operating Systems which are approaching their EOL, so I can make sure my workloads are moved before an Operation System goes away. | ✅ | ✅ | ✅ | **Scenario 13** - As a customer, I want to know all machine specifications which we run in our environment (physical machine, VM). | ✅ | ✅ | ✅ | #### OS Retirement and Removal Scenario | Data Collected | Included in Output | Shown on Report | Comments ---|:-:|:-:|:-:|--- **Scenario 14** - As a platform manager, I want to track the progress of removing support for an OS within our infrastructure. | ✅ | ✅| ✅ | **Scenario 15** - As a platform manager I want to see any differences between the official EOL date from the OEM and what our end of support date is | ✅ | ✅ | ✅ | ### Architecture ![image](https://user-images.githubusercontent.com/34948975/153942049-d0f62653-737a-4f17-a58b-e3f594060037.png) ### Milestones - [x] Identify business objectives in form of user scenarios and get feedback on them from our partners - [x] Design the internal data model that captures relationships of the fundamental entities (.NET versions, OSes, RIDs, Helix queues, 1ES, etc., docker) (30-January) - [x] https://github.com/dotnet/core-eng/issues/15677 (18-February) - [x] https://github.com/dotnet/core-eng/issues/15676 (25-February) - [x] Scenario 7, 8, 9 report review for March Engineering Services Backlog review (3-March) - [x] Implement report generation pipeline - [x] Collect usage telemetry (scenarios 3, 4, 10) - [x] Collect information about machine specification (scenario 13) - [x] Data collection, reports generation and process implementation for (TBD) - [x] the rest of "Querying Current State of Our Environment" scenarios (end of March) - [x] the "Initial OS Evaluation and Requests" scenarios (TBD) - [x] the "OS Onboarding and Adoption" scenario (TBD) - [x] https://github.com/dotnet/core-eng/issues/15668 # Task list ## Process related issues - [x] 
https://github.com/dotnet/core-eng/issues/7298 - [x] https://github.com/dotnet/core-eng/issues/10684 - [x] https://github.com/dotnet/core-eng/issues/13918 ## Proof of concept - [x] https://github.com/dotnet/core-eng/issues/15445 - [x] https://github.com/dotnet/core-eng/issues/15357 - [x] https://github.com/dotnet/core-eng/issues/15525 - [x] https://github.com/dotnet/core-eng/issues/15552 - [x] https://github.com/dotnet/core-eng/issues/15621 - [x] https://github.com/dotnet/core-eng/issues/15673 ## Refactoring and other work - [x] https://github.com/dotnet/core-eng/issues/15082 - [x] https://github.com/dotnet/core-eng/issues/15672 - [x] https://github.com/dotnet/core-eng/issues/15607 - [x] https://github.com/dotnet/core-eng/issues/15622 <!-- Do not remove this section. Triage will add new issues identified as part of this epic into this section. The v-team is responsible for triaging them into their business goals. If they do not fit into a business goals, please remove from this section. --> # Recently Triaged Issues - [x] https://github.com/dotnet/arcade/issues/8979 - [x] https://github.com/dotnet/arcade/issues/8976
non_test
create a single source of truth for our platform matrix migrated from chrisboh wrote motivation we continue to have lack of clarity within the division around what operating systems oses we support for net including details about how the environment look the data is available but is located in multiple places such as the official documentation and platforms osob yaml definitions kusto db azdo pipelines which makes it not suitable to be consumed by automation we need to collect this information in a single place so it s available both to us and to our customers in forms that are readable both by humans csvs powerbi reports and by automation e g json additionally there is not process in place that allows individual developers to see what os versions are currently being considered for support and or request support for a specific scenario they may have being able to provide clarity to the division and management on what oses we support in which environments and how we go about adding removing support is the end goal for this epic the resulting processes automation and produced data will be called matrix of truth abbreviated as mot in the following description business objectives the mot initiative has two complementary aspects process and tooling below are the business objectives for each of those aspects process objectives the process and communication of changes to the fundamental underlying data of the mot such as what concrete versions of oses are supported for a given version of net the initial decisions about which oses will be supported are made by the management and the pms our responsibility is to define the mechanics of conveying maintaining and presenting this information and their changes so that full audit trail i e who changed what when and why is captured identify v team membership dnceng and partner teams establish regular monthly sync for v team document and share process for operating systems lifecycle for each type of os linux windows mac supported 
implementation objectives to get the full mot picture we will need to aggregate data from various sources in an automated way on a regular basis use case scenarios for mot are defined and reviewed by our partners system that aggregates data necessary to perform the identified scenarios in an automated manner is implemented our customers can use the mot data in automation via json reports etc our customers can view and explore the mot data in a human readable form xls powerbi report etc the system ensures that full audit trail who changed what when and why of the underlying fundamental data is kept scenarios together with our partners discussed on engineering services backlog review on feb we have identified the following scenarios that will be performed by mot users regularly each of the following scenarios represents a business objective to fulfill roles platform managers individuals responsible for managing the lifecycle of an operating system for net pm team leadership dnceng customers anyone who builds tests net initial os evaluation and requests scenario process implemented scenario as a customer or platform manager i want to know what new operating systems are currently being evaluated and what the status of the approval process ✅ scenario as a customer or platform manager i want to be able to request support for a new operating system ✅ scenario as a platform manager i want to be able to estimate infrastructure costs of onboarding a new operating system based on utilization data of similar oses ✅ scenario as a platform manager i want to be able to see the overhead cost for supporting a new operating system 🔲 scenario as a platform manager i want to know the estimated release dates of upcoming new os versions so that i can plan my future work ✅ os onboarding and adoption scenario data collected included in output shown on report comments scenario as a platform manager i want to track the progress of getting support for a new operating system and its adoption 
rate by the product teams ✅ ✅ ✅ querying current state of our environment scenario data collected included in output shown on report comments scenario as a customer or platform manager i want to be able to see all the operating systems that are officially supported for version of net so i can make sure that we have the correct coverage ✅ ✅ ✅ scenario as a customer i want to see how each operating system is supported in our environment type of docker container vm physical machine operation system version build tools vs xcode version python version etc ​ ✅ ✅ ✅ scenario as a customer or platform manager i want to see what pipelines and or repos are targeting each of the supported operating systems or helix queues ✅ ✅ ✅ scenario as a platform manager i want to see estimated utilization and time we spend for a given operating system 🔲 🔲 🔲 based on discussion with chrisboh and we decided to postpone this scenario scenario as a platform manager i want to see which product branches are targeting each of the supported operating systems ✅ ✅ ✅ scenario as a customer or platform manager i want to see all operating systems which are approaching their eol so i can make sure my workloads are moved before an operation system goes away ✅ ✅ ✅ scenario as a customer i want to know all machine specifications which we run in our environment physical machine vm ✅ ✅ ✅ os retirement and removal scenario data collected included in output shown on report comments scenario as a platform manager i want to track the progress of removing support for an os within our infrastructure ✅ ✅ ✅ scenario as a platform manager i want to see any differences between the official eol date from the oem and what our end of support date is ✅ ✅ ✅ architecture milestones identify business objectives in form of user scenarios and get feedback on them from our partners design the internal data model that captures relationships of the fundamental entities net versions oses rids helix queues etc docker january 
february february scenario report review for march engineering services backlog review march implement report generation pipeline collect usage telemetry scenarios collect information about machine specification scenario data collection reports generation and process implementation for tbd the rest of querying current state of our environment scenarios end of march the initial os evaluation and requests scenarios tbd the os onboarding and adoption scenario tbd task list process related issues proof of concept refactoring and other work recently triaged issues
0
39,023
6,717,428,913
IssuesEvent
2017-10-14 21:07:06
syl20bnr/spacemacs
https://api.github.com/repos/syl20bnr/spacemacs
closed
linkapp is deprecated by Homebrew.
- Bug tracker - Documentation ✏ Hacktoberfest
Although Homebrew have deprecated linkups command and suggested using cask, the instruction on readme hasn't change.
1.0
linkapp is deprecated by Homebrew. - Although Homebrew have deprecated linkups command and suggested using cask, the instruction on readme hasn't change.
non_test
linkapp is deprecated by homebrew although homebrew have deprecated linkups command and suggested using cask the instruction on readme hasn t change
0
1,452
2,545,893,535
IssuesEvent
2015-01-29 20:05:09
palantir/plottable
https://api.github.com/repos/palantir/plottable
opened
Two sets of legend tests
p2 testing
![screen shot 2015-01-29 at 12 04 29 pm](https://cloud.githubusercontent.com/assets/1440449/5965183/0795891a-a7af-11e4-9a77-3901074e787f.png) Not sure why there are two. We should merge them, and eliminate any unnecessary tests.
1.0
Two sets of legend tests - ![screen shot 2015-01-29 at 12 04 29 pm](https://cloud.githubusercontent.com/assets/1440449/5965183/0795891a-a7af-11e4-9a77-3901074e787f.png) Not sure why there are two. We should merge them, and eliminate any unnecessary tests.
test
two sets of legend tests not sure why there are two we should merge them and eliminate any unnecessary tests
1
63,548
7,725,329,595
IssuesEvent
2018-05-24 17:37:09
phetsims/equality-explorer-basics
https://api.github.com/repos/phetsims/equality-explorer-basics
closed
add Lab screen
design:general status:ready-for-review
Add a screen that allows the user to change the values of the mystery objects. @amanda-phet please specify: - [x] name for the screen (tentatively "Lab") **AM: Lab fits the paradigm we use in other sims so I think that is fine here.** - [x] icon(s) for home screen and navigation bar **square = picker** - [x] the 3 objects, and associated artwork (icon and shadow) if they are new. (Please provide .ai file and credits.) **shapes: sphere, square, triangle** - [x] default values, and value ranges **AM: defaults should all be 1. Ranges can be [1,10].** - [x] control for changing the values (tentatively 3 pickers arranged horizontally in an accordion box below the snapshots accordion box) **AM: 3 pickers arranged horizontally are working well so I'd like to keep that.** - [x] title for the accordion box **AM: Values**
1.0
add Lab screen - Add a screen that allows the user to change the values of the mystery objects. @amanda-phet please specify: - [x] name for the screen (tentatively "Lab") **AM: Lab fits the paradigm we use in other sims so I think that is fine here.** - [x] icon(s) for home screen and navigation bar **square = picker** - [x] the 3 objects, and associated artwork (icon and shadow) if they are new. (Please provide .ai file and credits.) **shapes: sphere, square, triangle** - [x] default values, and value ranges **AM: defaults should all be 1. Ranges can be [1,10].** - [x] control for changing the values (tentatively 3 pickers arranged horizontally in an accordion box below the snapshots accordion box) **AM: 3 pickers arranged horizontally are working well so I'd like to keep that.** - [x] title for the accordion box **AM: Values**
non_test
add lab screen add a screen that allows the user to change the values of the mystery objects amanda phet please specify name for the screen tentatively lab am lab fits the paradigm we use in other sims so i think that is fine here icon s for home screen and navigation bar square picker the objects and associated artwork icon and shadow if they are new please provide ai file and credits shapes sphere square triangle default values and value ranges am defaults should all be ranges can be control for changing the values tentatively pickers arranged horizontally in an accordion box below the snapshots accordion box am pickers arranged horizontally are working well so i d like to keep that title for the accordion box am values
0
310,193
26,705,421,452
IssuesEvent
2023-01-27 17:42:34
ntop/ntopng
https://api.github.com/repos/ntop/ntopng
closed
Unable to disable items in Category Lists
Ready to Test Waiting for Feedback ⌛
CentOS 8 Stream ntopng Enterprise L v.5.4.230112 What happened: Trying to disable items in Category Lists, but nothing happened. Items still enabled after edit. But on some instances lists were disabled correctly. ![image](https://user-images.githubusercontent.com/67421019/212245010-f1493890-9bb3-4363-8ef7-ad7367cf9aeb.png)
1.0
Unable to disable items in Category Lists - CentOS 8 Stream ntopng Enterprise L v.5.4.230112 What happened: Trying to disable items in Category Lists, but nothing happened. Items still enabled after edit. But on some instances lists were disabled correctly. ![image](https://user-images.githubusercontent.com/67421019/212245010-f1493890-9bb3-4363-8ef7-ad7367cf9aeb.png)
test
unable to disable items in category lists centos stream ntopng enterprise l v what happened trying to disable items in category lists but nothing happened items still enabled after edit but on some instances lists were disabled correctly
1
39,004
5,207,401,589
IssuesEvent
2017-01-24 23:25:15
e107inc/e107
https://api.github.com/repos/e107inc/e107
closed
login.php
enhancement testing required
Is this page working ok? If I'm logged in, it provides an error saying I'm logged in - good. If I'm not logged in, it redirects to last page. On line 23, should there be an ! before (empty($pref['user_reg']) && empty($pref['social_login_active']))) to check if either of those are not empty? Presumably, if they're empty the user is not logged in, if they are populated, the user is logged in?
1.0
login.php - Is this page working ok? If I'm logged in, it provides an error saying I'm logged in - good. If I'm not logged in, it redirects to last page. On line 23, should there be an ! before (empty($pref['user_reg']) && empty($pref['social_login_active']))) to check if either of those are not empty? Presumably, if they're empty the user is not logged in, if they are populated, the user is logged in?
test
login php is this page working ok if i m logged in it provides an error saying i m logged in good if i m not logged in it redirects to last page on line should there be an before empty pref empty pref to check if either of those are not empty presumably if they re empty the user is not logged in if they are populated the user is logged in
1
660,342
21,962,047,253
IssuesEvent
2022-05-24 16:41:16
dojot/dojot
https://api.github.com/repos/dojot/dojot
closed
[GUI NX] - Color Selection does not work.
Type:Bug :bug: Priority:Low Team:Frontend Complexity: Low
* **I'm submitting a ...** - [X] bug report - [ ] feature request - [ ] support request Color Selection does not work. ![image (2).png](https://images.zenhubusercontent.com/5d138e07f0489e0b0b4be7a3/9d5ff908-b6ef-46ff-878f-5c217cce2496) **Page inspection capture is without information for this problem** ![Captura de tela de 2022-04-11 10-13-13.png](https://images.zenhubusercontent.com/5d138e07f0489e0b0b4be7a3/463a3cbb-bba0-44d1-894f-0d6167d5abc5) * **Please tell us about your environment:** - Version: GUI-NX - Environment: [docker-compose] - Operating system: [Ubuntu 16.04]
1.0
[GUI NX] - Color Selection does not work. - * **I'm submitting a ...** - [X] bug report - [ ] feature request - [ ] support request Color Selection does not work. ![image (2).png](https://images.zenhubusercontent.com/5d138e07f0489e0b0b4be7a3/9d5ff908-b6ef-46ff-878f-5c217cce2496) **Page inspection capture is without information for this problem** ![Captura de tela de 2022-04-11 10-13-13.png](https://images.zenhubusercontent.com/5d138e07f0489e0b0b4be7a3/463a3cbb-bba0-44d1-894f-0d6167d5abc5) * **Please tell us about your environment:** - Version: GUI-NX - Environment: [docker-compose] - Operating system: [Ubuntu 16.04]
non_test
color selection does not work i m submitting a bug report feature request support request color selection does not work page inspection capture is without information for this problem please tell us about your environment version gui nx environment operating system
0
30,665
4,643,106,096
IssuesEvent
2016-09-30 12:20:09
SpamExperts/plesk-linux-addon
https://api.github.com/repos/SpamExperts/plesk-linux-addon
closed
Make acceptance tests run on Chrome driver
task testing
Modify tests in order to run on FirefoxDriver and ChromeDriver. In order to achieve that you will need to use CSS Locators and XPATHs with [Locator::combine](http://codeception.com/docs/reference/Locator#combine) function. Also repair tests if they are outdated.
1.0
Make acceptance tests run on Chrome driver - Modify tests in order to run on FirefoxDriver and ChromeDriver. In order to achieve that you will need to use CSS Locators and XPATHs with [Locator::combine](http://codeception.com/docs/reference/Locator#combine) function. Also repair tests if they are outdated.
test
make acceptance tests run on chrome driver modify tests in order to run on firefoxdriver and chromedriver in order to achieve that you will need to use css locators and xpaths with function also repair tests if they are outdated
1
36,897
9,920,362,550
IssuesEvent
2019-06-30 08:32:09
Kieranties/SimpleVersion
https://api.github.com/repos/Kieranties/SimpleVersion
closed
Investigate ILMerge
:building_construction: Infrastructure :sparkles: feature
Initial investigation into CoreRT looks promising, however there were some issues with its usage and LibGit2Sharp ``` Unhandled Exception: System.Exception: Method '[LibGit2Sharp]LibGit2Sharp.Core.NativeMethods.git_repository_open(git_repository*&,FilePath)' requires non-trivial marshalling that is not yet supported by this compiler. ``` Also consider: https://github.com/Hubert-Rybak/dotnet-warp
1.0
Investigate ILMerge - Initial investigation into CoreRT looks promising, however there were some issues with its usage and LibGit2Sharp ``` Unhandled Exception: System.Exception: Method '[LibGit2Sharp]LibGit2Sharp.Core.NativeMethods.git_repository_open(git_repository*&,FilePath)' requires non-trivial marshalling that is not yet supported by this compiler. ``` Also consider: https://github.com/Hubert-Rybak/dotnet-warp
non_test
investigate ilmerge initial investigation into corert looks promising however there were some issues with its usage and unhandled exception system exception method core nativemethods git repository open git repository filepath requires non trivial marshalling that is not yet supported by this compiler also consider
0
451,897
13,043,143,670
IssuesEvent
2020-07-29 00:36:31
longhorn/longhorn
https://api.github.com/repos/longhorn/longhorn
closed
[BUG] Unable to attach volumes with `node.session.scan = manual`
area/engine bug priority/3
**Describe the bug** When I set `node.session.scan = manual` in iscsid.conf, all pods cannot attach Longhorn volumes. The pod status sticks in ContainerCreating and the Longhorn volume state sticks in attaching/detaching loop. The reason why I set `node.session.scan = manual` is to avoid the following issues with another CSI Driver. - [Automatic session scanning discovers all iSCSI devices during login and can cause LUN assignment issues #90982](https://github.com/kubernetes/kubernetes/issues/90982) - [PV and volume size are different #410](https://github.com/NetApp/trident/issues/410) `node.session.scan` setting was introduced in [open-iscsi 2.0.874-5ubuntu2.10](https://launchpad.net/ubuntu/+source/open-iscsi/2.0.874-5ubuntu2.10) in Ubuntu. **To Reproduce** 1. Set `node.session.scan = manual` in `/etc/iscsi/iscsid.conf` 2. Restart iscsid `sudo systemctl restart iscsid` 3. Create Longhorn PVC and Pod 4. Pod sticks in ContainerCreating **Expected behavior** Pod can attach volumes with `node.session.scan = manual`. **Log** "Instance ... 
exit status 1" errors appear repeatedly in longhorn-manager ``` time="2020-07-20T06:05:22Z" level=warning msg="Instance pvc-03eb6587-158d-4d5a-bb70-42f68f2c5c37-e-e8b0d898 is state error, error message: exit status 1" ``` "fail to read magic version" errors appear repeatedly in /var/log/tgtd.log ``` tgtd: bs_thread_open(409) 16 lh_client_close_conn: Closing connection receive_msg: fail to read magic version response_process: Receive response returned error lh_client_close_conn: Closing connection lh_client_close_conn: Connection close complete tgtd: device_mgmt(246) sz:108 params:path=/var/run/longhorn-pvc-03eb6587-158d-4d5a-bb70-42f68f2c5c37.sock,bstype=longhorn,bsopts=size=2147483648 ``` **Environment:** - Longhorn version: v1.0.0 - Kubernetes version: v1.18.3, v1.17.9 - Node OS type and version: Ubuntu 18.04.4 LTS - open-iscsi: [2.0.874-5ubuntu2.10](https://launchpad.net/ubuntu/+source/open-iscsi/2.0.874-5ubuntu2.10) **Additional context**
1.0
[BUG] Unable to attach volumes with `node.session.scan = manual` - **Describe the bug** When I set `node.session.scan = manual` in iscsid.conf, all pods cannot attach Longhorn volumes. The pod status sticks in ContainerCreating and the Longhorn volume state sticks in attaching/detaching loop. The reason why I set `node.session.scan = manual` is to avoid the following issues with another CSI Driver. - [Automatic session scanning discovers all iSCSI devices during login and can cause LUN assignment issues #90982](https://github.com/kubernetes/kubernetes/issues/90982) - [PV and volume size are different #410](https://github.com/NetApp/trident/issues/410) `node.session.scan` setting was introduced in [open-iscsi 2.0.874-5ubuntu2.10](https://launchpad.net/ubuntu/+source/open-iscsi/2.0.874-5ubuntu2.10) in Ubuntu. **To Reproduce** 1. Set `node.session.scan = manual` in `/etc/iscsi/iscsid.conf` 2. Restart iscsid `sudo systemctl restart iscsid` 3. Create Longhorn PVC and Pod 4. Pod sticks in ContainerCreating **Expected behavior** Pod can attach volumes with `node.session.scan = manual`. **Log** "Instance ... 
exit status 1" errors appear repeatedly in longhorn-manager ``` time="2020-07-20T06:05:22Z" level=warning msg="Instance pvc-03eb6587-158d-4d5a-bb70-42f68f2c5c37-e-e8b0d898 is state error, error message: exit status 1" ``` "fail to read magic version" errors appear repeatedly in /var/log/tgtd.log ``` tgtd: bs_thread_open(409) 16 lh_client_close_conn: Closing connection receive_msg: fail to read magic version response_process: Receive response returned error lh_client_close_conn: Closing connection lh_client_close_conn: Connection close complete tgtd: device_mgmt(246) sz:108 params:path=/var/run/longhorn-pvc-03eb6587-158d-4d5a-bb70-42f68f2c5c37.sock,bstype=longhorn,bsopts=size=2147483648 ``` **Environment:** - Longhorn version: v1.0.0 - Kubernetes version: v1.18.3, v1.17.9 - Node OS type and version: Ubuntu 18.04.4 LTS - open-iscsi: [2.0.874-5ubuntu2.10](https://launchpad.net/ubuntu/+source/open-iscsi/2.0.874-5ubuntu2.10) **Additional context**
non_test
unable to attach volumes with node session scan manual describe the bug when i set node session scan manual in iscsid conf all pods cannot attach longhorn volumes the pod status sticks in containercreating and the longhorn volume state sticks in attaching detaching loop the reason why i set node session scan manual is to avoid the following issues with another csi driver node session scan setting was introduced in in ubuntu to reproduce set node session scan manual in etc iscsi iscsid conf restart iscsid sudo systemctl restart iscsid create longhorn pvc and pod pod sticks in containercreating expected behavior pod can attach volumes with node session scan manual log instance exit status errors appear repeatedly in longhorn manager time level warning msg instance pvc e is state error error message exit status fail to read magic version errors appear repeatedly in var log tgtd log tgtd bs thread open lh client close conn closing connection receive msg fail to read magic version response process receive response returned error lh client close conn closing connection lh client close conn connection close complete tgtd device mgmt sz params path var run longhorn pvc sock bstype longhorn bsopts size environment longhorn version kubernetes version node os type and version ubuntu lts open iscsi additional context
0
141,490
11,422,817,843
IssuesEvent
2020-02-03 14:51:55
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/uptime/overview·ts - Uptime app overview page pagination is cleared when filter criteria changes
Team:uptime [zube]: In Progress blocker failed-test skipped-test v7.6.0
A test failed on a tracked branch ``` Error: expected false to equal true at Assertion.assert (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:100:11) at Assertion.be.Assertion.equal (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:227:8) at Assertion.be (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:69:22) at Context.it (test/functional/apps/uptime/overview.ts:37:27) at process._tickCallback (internal/process/next_tick.js:68:7) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/2202/) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/uptime/overview·ts","test.name":"Uptime app overview page pagination is cleared when filter criteria changes","test.failCount":3}} -->
2.0
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/uptime/overview·ts - Uptime app overview page pagination is cleared when filter criteria changes - A test failed on a tracked branch ``` Error: expected false to equal true at Assertion.assert (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:100:11) at Assertion.be.Assertion.equal (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:227:8) at Assertion.be (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:69:22) at Context.it (test/functional/apps/uptime/overview.ts:37:27) at process._tickCallback (internal/process/next_tick.js:68:7) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/2202/) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/uptime/overview·ts","test.name":"Uptime app overview page pagination is cleared when filter criteria changes","test.failCount":3}} -->
test
failing test chrome x pack ui functional tests x pack test functional apps uptime overview·ts uptime app overview page pagination is cleared when filter criteria changes a test failed on a tracked branch error expected false to equal true at assertion assert dev shm workspace kibana packages kbn expect expect js at assertion be assertion equal dev shm workspace kibana packages kbn expect expect js at assertion be dev shm workspace kibana packages kbn expect expect js at context it test functional apps uptime overview ts at process tickcallback internal process next tick js first failure
1
608,160
18,816,838,776
IssuesEvent
2021-11-10 00:52:57
canonical-web-and-design/maas-ui
https://api.github.com/repos/canonical-web-and-design/maas-ui
closed
Allow selecting custom storage layout in UI
Priority: High Question ❓
Bug originally filed by ack at https://bugs.launchpad.net/bugs/1949455 MAAS now allows to create a custom storage layout for a machine, by providing the configuration at commissioning time (from a custom commissioning script). An additional "custom" storage layout is available, and selectable from the API. It should be possible to apply the custom layout through the UI as well. If the machine doesn't have custom layout configuration from commissioning, an error will be raised.
1.0
Allow selecting custom storage layout in UI - Bug originally filed by ack at https://bugs.launchpad.net/bugs/1949455 MAAS now allows to create a custom storage layout for a machine, by providing the configuration at commissioning time (from a custom commissioning script). An additional "custom" storage layout is available, and selectable from the API. It should be possible to apply the custom layout through the UI as well. If the machine doesn't have custom layout configuration from commissioning, an error will be raised.
non_test
allow selecting custom storage layout in ui bug originally filed by ack at maas now allows to create a custom storage layout for a machine by providing the configuration at commissioning time from a custom commissioning script an additional custom storage layout is available and selectable from the api it should be possible to apply the custom layout through the ui as well if the machine doesn t have custom layout configuration from commissioning an error will be raised
0
720,258
24,785,987,953
IssuesEvent
2022-10-24 09:48:18
tyndyrn/Visceral-Carnage_GAME
https://api.github.com/repos/tyndyrn/Visceral-Carnage_GAME
closed
Ranged Enemy is Small
bug Priority: High Not game breaking Desktop and Vr
The enemy holding the gun is smaller compared to the other enemies, completely disproportionate and unimmersive This bug is in the Semester2_Build3 build This is a visual bug, the behavior of the enemy is as expected though the model is small. The model should be at least the size of the other enemy's in the game. OS: Unreal Engine version 4.27.2 Additional context This bug is hilarious, and it will be sad to see it go
1.0
Ranged Enemy is Small - The enemy holding the gun is smaller compared to the other enemies, completely disproportionate and unimmersive This bug is in the Semester2_Build3 build This is a visual bug, the behavior of the enemy is as expected though the model is small. The model should be at least the size of the other enemy's in the game. OS: Unreal Engine version 4.27.2 Additional context This bug is hilarious, and it will be sad to see it go
non_test
ranged enemy is small the enemy holding the gun is smaller compared to the other enemies completely disproportionate and unimmersive this bug is in the build this is a visual bug the behavior of the enemy is as expected though the model is small the model should be at least the size of the other enemy s in the game os unreal engine version additional context this bug is hilarious and it will be sad to see it go
0
351,187
31,987,673,195
IssuesEvent
2023-09-21 01:37:13
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Flaky e2e tests: failed to receive 128 entries
bug flaky test
### Component(s) _No response_ ### Describe the issue you're reporting ``` === RUN TestE2E e2e_test.go:495: Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:495 /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:83 Error: Condition never satisfied Test: TestE2E Messages: failed to receive 128 entries, received 0 metrics, 0 traces, 0 logs in 3 minutes W08[16](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:17) 22:23:04.[17](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:18)9271 [19](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:20)566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning W0816 22:23:04.380233 19566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning W0816 [22](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:23):[23](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:24):04.578882 19566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning --- FAIL: TestE2E (194.01s) ``` https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:218 
https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883079006/job/15955266680#step:14:218 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883974046/job/15958005648#step:14:218 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883677972/job/15957116822#step:14:215 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883554071/job/15956758376#step:14:218
1.0
Flaky e2e tests: failed to receive 128 entries - ### Component(s) _No response_ ### Describe the issue you're reporting ``` === RUN TestE2E e2e_test.go:495: Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:495 /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:83 Error: Condition never satisfied Test: TestE2E Messages: failed to receive 128 entries, received 0 metrics, 0 traces, 0 logs in 3 minutes W08[16](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:17) 22:23:04.[17](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:18)9271 [19](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:20)566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning W0816 22:23:04.380233 19566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning W0816 [22](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:23):[23](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:24):04.578882 19566 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning --- FAIL: TestE2E (194.01s) ``` https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5884286401/job/15958943828#step:14:218 
https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883079006/job/15955266680#step:14:218 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883974046/job/15958005648#step:14:218 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883677972/job/15957116822#step:14:215 https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5883554071/job/15956758376#step:14:218
test
flaky tests failed to receive entries component s no response describe the issue you re reporting run test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor test go home runner work opentelemetry collector contrib opentelemetry collector contrib processor test go error condition never satisfied test messages failed to receive entries received metrics traces logs in minutes warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning fail
1
102,117
8,818,138,715
IssuesEvent
2018-12-31 09:14:45
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
testing 4 : ApiV1OrgsByUserGetQueryParamPageNegativeNumber
testing 4
Project : testing 4 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzkxY2I0NDEtYzdmZi00YzgxLTg4YTgtNzM5YjgxYzI0ZDQ5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 31 Dec 2018 09:12:59 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/by-user?page=-1 Request : Response : { "timestamp" : "2018-12-31T09:12:59.795+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/orgs/by-user" } Logs : 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : URL [http://13.56.210.25/api/v1/api/v1/orgs/by-user?page=-1] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Method [GET] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Request [] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Response [{ "timestamp" : "2018-12-31T09:12:59.795+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/orgs/by-user" }] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzkxY2I0NDEtYzdmZi00YzgxLTg4YTgtNzM5YjgxYzI0ZDQ5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], 
Date=[Mon, 31 Dec 2018 09:12:59 GMT]}] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : StatusCode [404] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Time [534] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Size [147] 2018-12-31 09:12:59 INFO [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed] 2018-12-31 09:12:59 ERROR [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
testing 4 : ApiV1OrgsByUserGetQueryParamPageNegativeNumber - Project : testing 4 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzkxY2I0NDEtYzdmZi00YzgxLTg4YTgtNzM5YjgxYzI0ZDQ5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 31 Dec 2018 09:12:59 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/by-user?page=-1 Request : Response : { "timestamp" : "2018-12-31T09:12:59.795+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/orgs/by-user" } Logs : 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : URL [http://13.56.210.25/api/v1/api/v1/orgs/by-user?page=-1] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Method [GET] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Request [] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Response [{ "timestamp" : "2018-12-31T09:12:59.795+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/orgs/by-user" }] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzkxY2I0NDEtYzdmZi00YzgxLTg4YTgtNzM5YjgxYzI0ZDQ5; Path=/; HttpOnly], 
Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 31 Dec 2018 09:12:59 GMT]}] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : StatusCode [404] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Time [534] 2018-12-31 09:12:59 DEBUG [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Size [147] 2018-12-31 09:12:59 INFO [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed] 2018-12-31 09:12:59 ERROR [ApiV1OrgsByUserGetQueryParamPageNegativeNumber] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
test
testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api orgs by user logs debug url debug method debug request debug request headers accept authorization debug response timestamp status error not found message no message available path api api orgs by user debug response headers x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date debug statuscode debug time debug size info assertion resolved to result error assertion resolved to result fx bot
1
254,663
8,081,433,838
IssuesEvent
2018-08-08 03:28:51
sys-bio/tellurium
https://api.github.com/repos/sys-bio/tellurium
closed
pysces expects libsbml, not tesbml
bug priority
From @hsauro in @290. The libsbml issue is causing some problems for me too, I can't run pysces on tellurium because pysces expects to find libsbml not tesbml. Hence I can't load sbml models into pysces under tellurium.
1.0
pysces expects libsbml, not tesbml - From @hsauro in @290. The libsbml issue is causing some problems for me too, I can't run pysces on tellurium because pysces expects to find libsbml not tesbml. Hence I can't load sbml models into pysces under tellurium.
non_test
pysces expects libsbml not tesbml from hsauro in the libsbml issue is causing some problems for me too i can t run pysces on tellurium because pysces expects to find libsbml not tesbml hence i can t load sbml models into pysces under tellurium
0
48,323
5,953,636,705
IssuesEvent
2017-05-27 09:43:06
markekraus/PSRAW
https://api.github.com/repos/markekraus/PSRAW
closed
Add PSScriptAnaylzer rules to Classes
test enhancements
Classes were removed from PSScriptAnalyzer tests because of parse errors due to the way they are imported by the module in a serial fashion. They need to be re-added with some way to filter out expected parse errors.. Probably using something like this ```powershell Invoke-ScriptAnalyzer -Path $file -IncludeRule $rules.rulename -ErrorVariable PSSAErrors -ErrorAction SilentlyContinue ``` And then doing a foreach on the `$PSSAErrors` variable.
1.0
Add PSScriptAnaylzer rules to Classes - Classes were removed from PSScriptAnalyzer tests because of parse errors due to the way they are imported by the module in a serial fashion. They need to be re-added with some way to filter out expected parse errors.. Probably using something like this ```powershell Invoke-ScriptAnalyzer -Path $file -IncludeRule $rules.rulename -ErrorVariable PSSAErrors -ErrorAction SilentlyContinue ``` And then doing a foreach on the `$PSSAErrors` variable.
test
add psscriptanaylzer rules to classes classes were removed from psscriptanalyzer tests because of parse errors due to the way they are imported by the module in a serial fashion they need to be re added with some way to filter out expected parse errors probably using something like this powershell invoke scriptanalyzer path file includerule rules rulename errorvariable pssaerrors erroraction silentlycontinue and then doing a foreach on the pssaerrors variable
1