Dataset columns (name, dtype, and observed value range or class count):

| Column | Dtype | Values |
|---|---|---|
| Unnamed: 0 | int64 | 0 - 832k |
| id | float64 | 2.49B - 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 - 19 |
| repo | stringlengths | 4 - 112 |
| repo_url | stringlengths | 33 - 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 - 1.02k |
| labels | stringlengths | 4 - 1.54k |
| body | stringlengths | 1 - 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 - 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 - 252k |
| binary_label | int64 | 0 - 1 |
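The schema can be exercised with a minimal sketch: `binary_label` is an int64 in {0, 1} and, judging from the sample records below, mirrors the two-class string `label` column (`test` -> 1, `non_test` -> 0). The tiny inline rows here are hypothetical stand-ins for the real dataset, which would normally be loaded from its source file rather than constructed by hand.

```python
import pandas as pd

# Hypothetical sample rows with the same column names as the schema above.
rows = [
    {"title": "Fail to create/delete blob containers", "label": "test"},
    {"title": "Make it faster", "label": "non_test"},
]
df = pd.DataFrame(rows)

# Derive binary_label from the two-class `label` column
# (assumption: test -> 1, non_test -> 0, as seen in the records below).
df["binary_label"] = (df["label"] == "test").astype(int)

print(df["binary_label"].tolist())  # [1, 0]
```

This reproduces the label/binary_label pairing visible in the records that follow, where `test` rows carry `1` and `non_test` rows carry `0`.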
98,455
8,677,622,875
IssuesEvent
2018-11-30 17:18:28
Microsoft/AzureStorageExplorer
https://api.github.com/repos/Microsoft/AzureStorageExplorer
closed
Fail to create/delete blob containers
:exclamation: must fix :gear: rbac :white_check_mark: merged testing
Storage Explorer Version: 1.5.0 Platform/OS Version: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04 Architecture: ia32/x64 Build Number: 20181109.1 Commit: e721f1fb Regression From: Not a regression #### Steps to Reproduce: #### 1. Add permission with the role ‘Storage Blob Data Contributor(preview) + Reader’ under Storage Account level -> Select a member and save. 2. Use the selected member’s account to sign in. 3. Expand one storage account and try to create/delete a blob container. 4. Check whether the blob container can be created/deleted or not. #### Expected Experience: #### Succeed to create/delete blob containers. #### Actual Experience: #### Fail to create/delete blob containers. #### More Info: #### Closing the popped error dialog directly the created blob container shows after refreshing the Blob Containers node. ![image](https://user-images.githubusercontent.com/41351993/48406718-205b2e00-e770-11e8-8c63-97943999c2c5.png)
1.0
Fail to create/delete blob containers - Storage Explorer Version: 1.5.0 Platform/OS Version: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04 Architecture: ia32/x64 Build Number: 20181109.1 Commit: e721f1fb Regression From: Not a regression #### Steps to Reproduce: #### 1. Add permission with the role ‘Storage Blob Data Contributor(preview) + Reader’ under Storage Account level -> Select a member and save. 2. Use the selected member’s account to sign in. 3. Expand one storage account and try to create/delete a blob container. 4. Check whether the blob container can be created/deleted or not. #### Expected Experience: #### Succeed to create/delete blob containers. #### Actual Experience: #### Fail to create/delete blob containers. #### More Info: #### Closing the popped error dialog directly the created blob container shows after refreshing the Blob Containers node. ![image](https://user-images.githubusercontent.com/41351993/48406718-205b2e00-e770-11e8-8c63-97943999c2c5.png)
test
fail to create delete blob containers storage explorer version platform os version windows macos high sierra linux ubuntu architecture build number commit regression from not a regression steps to reproduce add permission with the role ‘storage blob data contributor preview reader’ under storage account level select a member and save use the selected member’s account to sign in expand one storage account and try to create delete a blob container check whether the blob container can be created deleted or not expected experience succeed to create delete blob containers actual experience fail to create delete blob containers more info closing the popped error dialog directly the created blob container shows after refreshing the blob containers node
1
343,777
30,688,900,897
IssuesEvent
2023-07-26 14:02:48
trixi-framework/Trixi.jl
https://api.github.com/repos/trixi-framework/Trixi.jl
opened
MPI macOS tests time out sometimes
testing parallelization
We have observed CI failures of macOS with MPI due to timeout recently - not in every CI run but maybe something like every second run? Some failing jobs are - https://github.com/trixi-framework/Trixi.jl/actions/runs/5644250934/job/15287672140 - https://github.com/trixi-framework/Trixi.jl/actions/runs/5664951100/job/15349017631 - https://github.com/trixi-framework/Trixi.jl/actions/runs/5655829278/job/15325487560 A working run is - https://github.com/trixi-framework/Trixi.jl/actions/runs/5659469799/job/15333080417
1.0
MPI macOS tests time out sometimes - We have observed CI failures of macOS with MPI due to timeout recently - not in every CI run but maybe something like every second run? Some failing jobs are - https://github.com/trixi-framework/Trixi.jl/actions/runs/5644250934/job/15287672140 - https://github.com/trixi-framework/Trixi.jl/actions/runs/5664951100/job/15349017631 - https://github.com/trixi-framework/Trixi.jl/actions/runs/5655829278/job/15325487560 A working run is - https://github.com/trixi-framework/Trixi.jl/actions/runs/5659469799/job/15333080417
test
mpi macos tests time out sometimes we have observed ci failures of macos with mpi due to timeout recently not in every ci run but maybe something like every second run some failing jobs are a working run is
1
109,626
23,800,965,496
IssuesEvent
2022-09-03 09:25:01
Toma400/The_Isle_of_Ansur
https://api.github.com/repos/Toma400/The_Isle_of_Ansur
opened
Better "prioritised" system for panoramas and menu sounds & Mod Loading Order
feature suggestion code improvement
Having possibility to prioritise panoramas and sounds is cool, as they will overwrite vanilla, but... They will not overwrite other mods. To explain it a bit in detail: if more than one mod use "PR%_" system, they will both shuffle through their data, so more than one mod can be prioritised. It's okay if you just want to overwrite vanilla, but not if you want to rule over all possible mods. Solution for this can be kinda simple, kinda not, mechanic known from Morrowind: Mod Loading Order (later: MLO) This would ensure that if your mod loading order is correct, first one will be picked up. I suggest having new key, like "OV%_" to make prioritised and overwriting different behaviours, as it may be useful. If "OV%_" is used, listed backgrounds/sounds will also check which mod loaded first and limit the list only to those entries. Additional work you will need to do here is to make `Mods Screen` to be able to change MLO in-game, so they can be listed in correct order. Additional optional `info.json` value will not hurt either, and can be good way of pre-determining MLO status for your mod, without manually setting it by player. ``` { "loading_number": -50 } ``` Example above would be run **before** anyone with number greater than -50.
1.0
Better "prioritised" system for panoramas and menu sounds & Mod Loading Order - Having possibility to prioritise panoramas and sounds is cool, as they will overwrite vanilla, but... They will not overwrite other mods. To explain it a bit in detail: if more than one mod use "PR%_" system, they will both shuffle through their data, so more than one mod can be prioritised. It's okay if you just want to overwrite vanilla, but not if you want to rule over all possible mods. Solution for this can be kinda simple, kinda not, mechanic known from Morrowind: Mod Loading Order (later: MLO) This would ensure that if your mod loading order is correct, first one will be picked up. I suggest having new key, like "OV%_" to make prioritised and overwriting different behaviours, as it may be useful. If "OV%_" is used, listed backgrounds/sounds will also check which mod loaded first and limit the list only to those entries. Additional work you will need to do here is to make `Mods Screen` to be able to change MLO in-game, so they can be listed in correct order. Additional optional `info.json` value will not hurt either, and can be good way of pre-determining MLO status for your mod, without manually setting it by player. ``` { "loading_number": -50 } ``` Example above would be run **before** anyone with number greater than -50.
non_test
better prioritised system for panoramas and menu sounds mod loading order having possibility to prioritise panoramas and sounds is cool as they will overwrite vanilla but they will not overwrite other mods to explain it a bit in detail if more than one mod use pr system they will both shuffle through their data so more than one mod can be prioritised it s okay if you just want to overwrite vanilla but not if you want to rule over all possible mods solution for this can be kinda simple kinda not mechanic known from morrowind mod loading order later mlo this would ensure that if your mod loading order is correct first one will be picked up i suggest having new key like ov to make prioritised and overwriting different behaviours as it may be useful if ov is used listed backgrounds sounds will also check which mod loaded first and limit the list only to those entries additional work you will need to do here is to make mods screen to be able to change mlo in game so they can be listed in correct order additional optional info json value will not hurt either and can be good way of pre determining mlo status for your mod without manually setting it by player loading number example above would be run before anyone with number greater than
0
122,773
16,326,768,556
IssuesEvent
2021-05-12 02:29:26
Uniswap/uniswap-interface
https://api.github.com/repos/Uniswap/uniswap-interface
closed
noodle/squiggle on Pool Overview is low res
design p1
<img width="557" alt="Screen Shot 2021-05-04 at 5 09 39 PM" src="https://user-images.githubusercontent.com/44346752/117070315-8bf83380-acfb-11eb-86ac-78bc299bd159.png">
1.0
noodle/squiggle on Pool Overview is low res - <img width="557" alt="Screen Shot 2021-05-04 at 5 09 39 PM" src="https://user-images.githubusercontent.com/44346752/117070315-8bf83380-acfb-11eb-86ac-78bc299bd159.png">
non_test
noodle squiggle on pool overview is low res img width alt screen shot at pm src
0
41,916
2,869,088,229
IssuesEvent
2015-06-05 23:14:18
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Provide a paper project template
Area-Pkg Pkg-PolymerDevExp Priority-Medium Triaged Type-Enhancement
Currently there's no good starting point for creating a QP-styled application. Would be great if there was a pub run generator for this.
1.0
Provide a paper project template - Currently there's no good starting point for creating a QP-styled application. Would be great if there was a pub run generator for this.
non_test
provide a paper project template currently there s no good starting point for creating a qp styled application would be great if there was a pub run generator for this
0
145,636
13,156,653,574
IssuesEvent
2020-08-10 11:13:45
fowado/BauphysikSE1
https://api.github.com/repos/fowado/BauphysikSE1
closed
Deployment Plan erstellen
documentation
Deployment Plan für Beta entwerfen. (Softwareübergabe an Prof. Krawietz)
1.0
Deployment Plan erstellen - Deployment Plan für Beta entwerfen. (Softwareübergabe an Prof. Krawietz)
non_test
deployment plan erstellen deployment plan für beta entwerfen softwareübergabe an prof krawietz
0
244,734
18,766,762,764
IssuesEvent
2021-11-06 03:32:02
paperpatch/open_fridge
https://api.github.com/repos/paperpatch/open_fridge
opened
v1.1 Code Refactor - JavaScript
documentation good first issue
Current: - There's a lot of repeated functions and words in JavaScript. Perhaps combine them? - (Current CSS files are unused. This will coincide with Issue #40 CSS - Mobile First / CSS) Looking For: - Better refactoring of JavaScript.
1.0
v1.1 Code Refactor - JavaScript - Current: - There's a lot of repeated functions and words in JavaScript. Perhaps combine them? - (Current CSS files are unused. This will coincide with Issue #40 CSS - Mobile First / CSS) Looking For: - Better refactoring of JavaScript.
non_test
code refactor javascript current there s a lot of repeated functions and words in javascript perhaps combine them current css files are unused this will coincide with issue css mobile first css looking for better refactoring of javascript
0
341,120
30,567,096,081
IssuesEvent
2023-07-20 18:40:01
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
opened
DISABLED test_inline_dict_mutation (__main__.MiscTests)
triaged module: flaky-tests skipped module: dynamo
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inline_dict_mutation&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15204809608). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_inline_dict_mutation` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `dynamo/test_misc.py`
1.0
DISABLED test_inline_dict_mutation (__main__.MiscTests) - Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inline_dict_mutation&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15204809608). Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_inline_dict_mutation` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `dynamo/test_misc.py`
test
disabled test inline dict mutation main misctests platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test inline dict mutation there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path dynamo test misc py
1
11,793
3,226,889,508
IssuesEvent
2015-10-10 17:57:25
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Integration/e2e tests of kubectl support for multiple resources
area/test component/kubectl priority/P1 team/ux
Forked from #13812. We advocate that users should specify multiple resources in a single file, yet we don't have adequate testing of that feature, and we broke it when we turned on schema validation by default. It sounds like a test in test-cmd.sh would be adequate to exercise the default configuration of kubectl. There are multiple ways to specify multiple resources: * explicit List * YAML document separator `---` * JSON object concatenation(?) Apparently different input sources are handled differently, at least for alternatives to an explicit List, so we probably need to at least test file and stdin. Not sure about directory and URL. cc @liggitt @deads2k @feihujiang @smarterclayton @lavalamp
1.0
Integration/e2e tests of kubectl support for multiple resources - Forked from #13812. We advocate that users should specify multiple resources in a single file, yet we don't have adequate testing of that feature, and we broke it when we turned on schema validation by default. It sounds like a test in test-cmd.sh would be adequate to exercise the default configuration of kubectl. There are multiple ways to specify multiple resources: * explicit List * YAML document separator `---` * JSON object concatenation(?) Apparently different input sources are handled differently, at least for alternatives to an explicit List, so we probably need to at least test file and stdin. Not sure about directory and URL. cc @liggitt @deads2k @feihujiang @smarterclayton @lavalamp
test
integration tests of kubectl support for multiple resources forked from we advocate that users should specify multiple resources in a single file yet we don t have adequate testing of that feature and we broke it when we turned on schema validation by default it sounds like a test in test cmd sh would be adequate to exercise the default configuration of kubectl there are multiple ways to specify multiple resources explicit list yaml document separator json object concatenation apparently different input sources are handled differently at least for alternatives to an explicit list so we probably need to at least test file and stdin not sure about directory and url cc liggitt feihujiang smarterclayton lavalamp
1
293,740
9,008,383,866
IssuesEvent
2019-02-05 03:41:33
anishathalye/git-remote-dropbox
https://api.github.com/repos/anishathalye/git-remote-dropbox
closed
Make it faster
enhancement medium-priority
I do not have precise timing yet (will check and add some later), but subjectively, it seems that this remote is incomparably slower than e.g. ssh. It would be great if it could be sped up. Are there specific known bottlenecks, e.g. uploading only one file at a time?
1.0
Make it faster - I do not have precise timing yet (will check and add some later), but subjectively, it seems that this remote is incomparably slower than e.g. ssh. It would be great if it could be sped up. Are there specific known bottlenecks, e.g. uploading only one file at a time?
non_test
make it faster i do not have precise timing yet will check and add some later but subjectively it seems that this remote is incomparably slower than e g ssh it would be great if it could be sped up are there specific known bottlenecks e g uploading only one file at a time
0
286,924
24,794,893,474
IssuesEvent
2022-10-24 16:23:37
jdi-testing/jdi-light
https://api.github.com/repos/jdi-testing/jdi-light
closed
Update Test site: Carousels
TestSite Vuetify
Add the following features to cycled carousel: - [x] dark theme - [x] progress bar (y) - [x] reversed (y) - [x] vertical (y) - image cycling, delimiters - [x] margin
1.0
Update Test site: Carousels - Add the following features to cycled carousel: - [x] dark theme - [x] progress bar (y) - [x] reversed (y) - [x] vertical (y) - image cycling, delimiters - [x] margin
test
update test site carousels add the following features to cycled carousel dark theme progress bar y reversed y vertical y image cycling delimiters margin
1
316,173
27,142,218,222
IssuesEvent
2023-02-16 17:07:16
wazuh/wazuh
https://api.github.com/repos/wazuh/wazuh
closed
Release 4.4.0 - Revision 1 - Release Candidate RC1 - Footprint Metrics - VULNERABILITY-DETECTOR-REGISTER (2.5d)
team/cicd/automation type/release release test/4.4.0
## Footprint metrics information | | | |---------------------------------|--------------------------------------------| | **Main release candidate issue #** | #16132 | | **Main footprint metrics issue #** | #16141 | | **Version** | 4.4.0 | | **Release candidate #** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/4.4.0-rc1 | ## Stress test documentation ### Packages used - Repository: `packages-dev.wazuh.com` - Package path: `pre-release` - Package revision: `1` - **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3742/ --- <details><summary>Manager</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_RSS_MAXMIN.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_SWAP.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_EDPS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_Number_Events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_Queues_state.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_manager_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/logs/ossec_Test_stress_B3742_manager_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-manager-Test_stress_B3742_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/monitor-manager-Test_stress_B3742_manager-pre-release.csv) [Test_stress_B3742_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/Test_stress_B3742_manager_analysisd_state.csv) 
[Test_stress_B3742_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/Test_stress_B3742_manager_remoted_state.csv) </details> </details> <details><summary>Centos agent</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_SWAP.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_generated_events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_centos_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/logs/ossec_Test_stress_B3742_centos_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-agent-Test_stress_B3742_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/data/monitor-agent-Test_stress_B3742_centos-pre-release.csv) [Test_stress_B3742_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/data/Test_stress_B3742_centos_agentd_state.csv) </details> </details> <details><summary>Ubuntu agent</summary> + <details><summary>Plots</summary> 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_SWAP.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_VMS.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_generated_events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_ubuntu_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/logs/ossec_Test_stress_B3742_ubuntu_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-agent-Test_stress_B3742_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/data/monitor-agent-Test_stress_B3742_ubuntu-pre-release.csv) [Test_stress_B3742_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/data/Test_stress_B3742_ubuntu_agentd_state.csv) </details> </details> <details><summary>Windows agent</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Handles.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_generated_events.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_windows_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/logs/ossec_Test_stress_B3742_windows_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-winagent-Test_stress_B3742_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/data/monitor-winagent-Test_stress_B3742_windows-pre-release.csv) [Test_stress_B3742_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/data/Test_stress_B3742_windows_agentd_state.csv) </details> </details> <details><summary>macOS agent</summary> + <details><summary>Plots</summary> </details> + <details><summary>Logs and configuration</summary> </details> + <details><summary>CSV</summary> </details> </details> <details><summary>Solaris agent</summary> + <details><summary>Plots</summary> </details> + <details><summary>Logs and configuration</summary> </details> + <details><summary>CSV</summary> </details> </details>
1.0
Release 4.4.0 - Revision 1 - Release Candidate RC1 - Footprint Metrics - VULNERABILITY-DETECTOR-REGISTER (2.5d) - ## Footprint metrics information | | | |---------------------------------|--------------------------------------------| | **Main release candidate issue #** | #16132 | | **Main footprint metrics issue #** | #16141 | | **Version** | 4.4.0 | | **Release candidate #** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/4.4.0-rc1 | ## Stress test documentation ### Packages used - Repository: `packages-dev.wazuh.com` - Package path: `pre-release` - Package revision: `1` - **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3742/ --- <details><summary>Manager</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Read_Ops.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_SWAP.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/monitor-manager-Test_stress_B3742_manager-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_EDPS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_Number_Events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/plots/Test_stress_B3742_manager_analysisd_state_Queues_state.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_manager_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/logs/ossec_Test_stress_B3742_manager_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-manager-Test_stress_B3742_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/monitor-manager-Test_stress_B3742_manager-pre-release.csv) 
[Test_stress_B3742_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/Test_stress_B3742_manager_analysisd_state.csv) [Test_stress_B3742_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_manager_centos/data/Test_stress_B3742_manager_remoted_state.csv) </details> </details> <details><summary>Centos agent</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_RSS.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_SWAP.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/monitor-agent-Test_stress_B3742_centos-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_generated_events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/plots/Test_stress_B3742_centos_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_centos_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/logs/ossec_Test_stress_B3742_centos_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-agent-Test_stress_B3742_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/data/monitor-agent-Test_stress_B3742_centos-pre-release.csv) [Test_stress_B3742_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_centos/data/Test_stress_B3742_centos_agentd_state.csv) </details> 
</details> <details><summary>Ubuntu agent</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_FD.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_PSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_SWAP.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_VMS.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/monitor-agent-Test_stress_B3742_ubuntu-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_generated_events.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/plots/Test_stress_B3742_ubuntu_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_ubuntu_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/logs/ossec_Test_stress_B3742_ubuntu_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-agent-Test_stress_B3742_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/data/monitor-agent-Test_stress_B3742_ubuntu-pre-release.csv) [Test_stress_B3742_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_ubuntu/data/Test_stress_B3742_ubuntu_agentd_state.csv) </details> </details> <details><summary>Windows agent</summary> + <details><summary>Plots</summary> ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_CPU.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk_Read.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Disk_Written.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Handles.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Read_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_RSS_MAXMIN.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_RSS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_USS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_VMS.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/monitor-winagent-Test_stress_B3742_windows-pre-release_Write_Ops.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_events_buffered.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_generated_events.png) 
![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Number_of_messages.png) ![](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/plots/Test_stress_B3742_windows_agentd_state_AgentD_Status.png) </details> + <details><summary>Logs and configuration</summary> [ossec_Test_stress_B3742_windows_2023-02-13.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/logs/ossec_Test_stress_B3742_windows_2023-02-13.zip) </details> + <details><summary>CSV</summary> [monitor-winagent-Test_stress_B3742_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/data/monitor-winagent-Test_stress_B3742_windows-pre-release.csv) [Test_stress_B3742_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.4.0/B3742-3600m/B3742_agent_windows/data/Test_stress_B3742_windows_agentd_state.csv) </details> </details> <details><summary>macOS agent</summary> + <details><summary>Plots</summary> </details> + <details><summary>Logs and configuration</summary> </details> + <details><summary>CSV</summary> </details> </details> <details><summary>Solaris agent</summary> + <details><summary>Plots</summary> </details> + <details><summary>Logs and configuration</summary> </details> + <details><summary>CSV</summary> </details> </details>
test
release revision release candidate footprint metrics vulnerability detector register footprint metrics information main release candidate issue main footprint metrics issue version release candidate tag stress test documentation packages used repository packages dev wazuh com package path pre release package revision jenkins build manager plots logs and configuration csv centos agent plots logs and configuration csv ubuntu agent plots logs and configuration csv windows agent plots logs and configuration csv macos agent plots logs and configuration csv solaris agent plots logs and configuration csv
1
104,539
22,687,817,323
IssuesEvent
2022-07-04 15:46:15
GoogleForCreators/web-stories-wp
https://api.github.com/repos/GoogleForCreators/web-stories-wp
closed
Code Quality: Reference `@wordpress/data` stores by store definition
Group: WordPress Type: Code Quality Pod: WP Group: Blocks
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description <!-- A clear and concise description of what this task is about. --> Now that we require WordPress 5.7+, it's possible for us to follow the best practice of referencing data stores by their definitions. See https://make.wordpress.org/core/2021/02/22/changes-in-wordpress-data-api/ This applies to the `stories-block` package
1.0
Code Quality: Reference `@wordpress/data` stores by store definition - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description <!-- A clear and concise description of what this task is about. --> Now that we require WordPress 5.7+, it's possible for us to follow the best practice of referencing data stores by their definitions. See https://make.wordpress.org/core/2021/02/22/changes-in-wordpress-data-api/ This applies to the `stories-block` package
non_test
code quality reference wordpress data stores by store definition task description now that we require wordpress it s possible for us to follow the best practice of referencing data stores by their definitions see this applies to the stories block package
0
631,473
20,152,252,981
IssuesEvent
2022-02-09 13:31:58
ita-social-projects/horondi_client_be
https://api.github.com/repos/ita-social-projects/horondi_client_be
reopened
[Checkout Page] after entering too long phone number into "Phone" field page crashes
bug priority: medium functional severity: high
Environment Windows 10, Chrome Version 96.0.4664.110 Reproducible: Always **Preconditions:** 1. Go to https://horondi-front-staging.azurewebsites.net/ 2. Log in as a user 3. Add any item to the cart **Steps to reproduce:** 1. Go to cart 2. Click on 'Checkout' button 3. Fill in correct contact information ('First name', 'Last name', 'Email') 4. Fill in incorrect 'Phone' (e.g. 09876545678765456321) 5. Chose any delivery method 6. Fill all mandatory fields regarding the delivery address 6. Chose payment method 'Cash' 7. Click on 'Confirm order' button **Actual result:** page crashes (Please, see the screenshot) ![1](https://user-images.githubusercontent.com/96176970/147666091-b35b407d-7f34-449f-832d-2a513f215daf.PNG) **Expected result:**'Phone' field is highlighted in red and the error message appears below the field
1.0
[Checkout Page] after entering too long phone number into "Phone" field page crashes - Environment Windows 10, Chrome Version 96.0.4664.110 Reproducible: Always **Preconditions:** 1. Go to https://horondi-front-staging.azurewebsites.net/ 2. Log in as a user 3. Add any item to the cart **Steps to reproduce:** 1. Go to cart 2. Click on 'Checkout' button 3. Fill in correct contact information ('First name', 'Last name', 'Email') 4. Fill in incorrect 'Phone' (e.g. 09876545678765456321) 5. Chose any delivery method 6. Fill all mandatory fields regarding the delivery address 6. Chose payment method 'Cash' 7. Click on 'Confirm order' button **Actual result:** page crashes (Please, see the screenshot) ![1](https://user-images.githubusercontent.com/96176970/147666091-b35b407d-7f34-449f-832d-2a513f215daf.PNG) **Expected result:**'Phone' field is highlighted in red and the error message appears below the field
non_test
after entering too long phone number into phone field page crashes environment windows chrome version reproducible always preconditions go to log in as a user add any item to the cart steps to reproduce go to cart click on checkout button fill in correct contact information first name last name email fill in incorrect phone e g chose any delivery method fill all mandatory fields regarding the delivery address chose payment method cash click on confirm order button actual result page crashes please see the screenshot expected result phone field is highlighted in red and the error message appears below the field
0
80,119
15,356,920,486
IssuesEvent
2021-03-01 13:03:50
Leafwing-Studios/fop-game
https://api.github.com/repos/Leafwing-Studios/fop-game
opened
Change all integers to usize and isize
code-quality
Simpler, and likely faster due to lack of conversions. Reduce later if need arises.
1.0
Change all integers to usize and isize - Simpler, and likely faster due to lack of conversions. Reduce later if need arises.
non_test
change all integers to usize and isize simpler and likely faster due to lack of conversions reduce later if need arises
0
187,281
14,427,394,947
IssuesEvent
2020-12-06 03:45:40
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
plotozhu/MDCMainnet: swarm/pss/forwarding_test.go; 3 LoC
fresh test tiny
Found a possible issue in [plotozhu/MDCMainnet](https://www.github.com/plotozhu/MDCMainnet) at [swarm/pss/forwarding_test.go](https://github.com/plotozhu/MDCMainnet/blob/353d5a5e70a2a86b28467eefe2e82d1b0bfbe147/swarm/pss/forwarding_test.go#L234-L236) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to c at line 235 may start a goroutine [Click here to see the code in its original context.](https://github.com/plotozhu/MDCMainnet/blob/353d5a5e70a2a86b28467eefe2e82d1b0bfbe147/swarm/pss/forwarding_test.go#L234-L236) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, c := range testCases { testForwardMsg(t, ps, &c) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 353d5a5e70a2a86b28467eefe2e82d1b0bfbe147
1.0
plotozhu/MDCMainnet: swarm/pss/forwarding_test.go; 3 LoC - Found a possible issue in [plotozhu/MDCMainnet](https://www.github.com/plotozhu/MDCMainnet) at [swarm/pss/forwarding_test.go](https://github.com/plotozhu/MDCMainnet/blob/353d5a5e70a2a86b28467eefe2e82d1b0bfbe147/swarm/pss/forwarding_test.go#L234-L236) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to c at line 235 may start a goroutine [Click here to see the code in its original context.](https://github.com/plotozhu/MDCMainnet/blob/353d5a5e70a2a86b28467eefe2e82d1b0bfbe147/swarm/pss/forwarding_test.go#L234-L236) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, c := range testCases { testForwardMsg(t, ps, &c) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 353d5a5e70a2a86b28467eefe2e82d1b0bfbe147
test
plotozhu mdcmainnet swarm pss forwarding test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to c at line may start a goroutine click here to show the line s of go which triggered the analyzer go for c range testcases testforwardmsg t ps c leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
431,333
12,477,767,939
IssuesEvent
2020-05-29 15:30:01
kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines
closed
Client.wait_for_run_completion and "Terminate" in the UI
area/backend area/sdk kind/bug priority/p1 status/triaged
### What steps did you take: Start a pipeline via python code, and wait for it's execution: ``` client = Client() # fill this with whatever works for you, just start a random pipeline from python that does not return immediately run = client.run_pipeline(experiment_id="test", job_name="test job", pipeline_id="123456") # we don't care how long it takes, just wait completed_run = client.wait_for_run_completion(run_id=run.id, timeout=float("inf")) ``` Once it runs, click "Terminate" inside the kubeflow pipelines UI on the top right. For me, I hit terminate while the run was stuck on "ImagePullBackoff" (since some permissions in our docker registry changed), so before anything was actually being run, but while the step was in "Pending State". ### What happened: The code is now blocked forever, and does not receive knowledge of the termination. ### What did you expect to happen: The code returns with a message that the run was terminated, or crashes with the same message. Either is fine. I am not sure if this is the same bug as #1992 and #2588, since this also applies (the run is still in pending state after hitting terminate), nevertheless I wanted to state that it does not trigger "wait_for_run_completion" either. ### Environment: KFP version: Build commit: ee207f2 KFP SDK version: 0.4.0 /kind bug /area backend /area sdk
1.0
Client.wait_for_run_completion and "Terminate" in the UI - ### What steps did you take: Start a pipeline via python code, and wait for it's execution: ``` client = Client() # fill this with whatever works for you, just start a random pipeline from python that does not return immediately run = client.run_pipeline(experiment_id="test", job_name="test job", pipeline_id="123456") # we don't care how long it takes, just wait completed_run = client.wait_for_run_completion(run_id=run.id, timeout=float("inf")) ``` Once it runs, click "Terminate" inside the kubeflow pipelines UI on the top right. For me, I hit terminate while the run was stuck on "ImagePullBackoff" (since some permissions in our docker registry changed), so before anything was actually being run, but while the step was in "Pending State". ### What happened: The code is now blocked forever, and does not receive knowledge of the termination. ### What did you expect to happen: The code returns with a message that the run was terminated, or crashes with the same message. Either is fine. I am not sure if this is the same bug as #1992 and #2588, since this also applies (the run is still in pending state after hitting terminate), nevertheless I wanted to state that it does not trigger "wait_for_run_completion" either. ### Environment: KFP version: Build commit: ee207f2 KFP SDK version: 0.4.0 /kind bug /area backend /area sdk
non_test
client wait for run completion and terminate in the ui what steps did you take start a pipeline via python code and wait for it s execution client client fill this with whatever works for you just start a random pipeline from python that does not return immediately run client run pipeline experiment id test job name test job pipeline id we don t care how long it takes just wait completed run client wait for run completion run id run id timeout float inf once it runs click terminate inside the kubeflow pipelines ui on the top right for me i hit terminate while the run was stuck on imagepullbackoff since some permissions in our docker registry changed so before anything was actually being run but while the step was in pending state what happened the code is now blocked forever and does not receive knowledge of the termination what did you expect to happen the code returns with a message that the run was terminated or crashes with the same message either is fine i am not sure if this is the same bug as and since this also applies the run is still in pending state after hitting terminate nevertheless i wanted to state that it does not trigger wait for run completion either environment kfp version build commit kfp sdk version kind bug area backend area sdk
0
22,437
7,175,873,602
IssuesEvent
2018-01-31 07:55:02
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
c++_linux_opt_native.bins/opt/end2end_test --gtest_filter=ProxyEnd2end/ProxyEnd2endTest.EchoDeadline fails in Master
infra/BUILDPONY lang/c++ priority/P1
https://grpc-testing.appspot.com/job/gRPC_master_linux/1727/testReport/junit/(root)/c++_linux_opt_native/bins_opt_end2end_test___gtest_filter_ProxyEnd2end_ProxyEnd2endTest_EchoDeadline_3__GRPC_POLL_STRATEGY_epoll/ ``` Note: Google Test filter = ProxyEnd2end/ProxyEnd2endTest.EchoDeadline/3 [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from ProxyEnd2end/ProxyEnd2endTest [ RUN ] ProxyEnd2end/ProxyEnd2endTest.EchoDeadline/3 D0310 15:35:27.349632201 14727 end2end_test.cc:230] TestScenario{use_proxy=true, credentials='INSECURE_CREDENTIALS'} D0310 15:35:27.349675026 14727 ev_posix.c:107] Using polling engine: epoll I0310 15:35:27.352158780 14727 server_builder.cc:247] Synchronous server. Num CQs: 4, Min pollers: 1, Max Pollers: 2147483647, CQ timeout (msec): 10 D0310 15:35:27.352207037 14727 ev_posix.c:107] Using polling engine: epoll I0310 15:35:27.370007394 14727 server_builder.cc:247] Synchronous server. Num CQs: 4, Min pollers: 1, Max Pollers: 2147483647, CQ timeout (msec): 10 E0310 15:35:27.381182981 14727 sync_posix.c:66] assertion failed: pthread_mutex_unlock(mu) == 0 ******************************* Caught signal SIGABRT ```
1.0
c++_linux_opt_native.bins/opt/end2end_test --gtest_filter=ProxyEnd2end/ProxyEnd2endTest.EchoDeadline fails in Master - https://grpc-testing.appspot.com/job/gRPC_master_linux/1727/testReport/junit/(root)/c++_linux_opt_native/bins_opt_end2end_test___gtest_filter_ProxyEnd2end_ProxyEnd2endTest_EchoDeadline_3__GRPC_POLL_STRATEGY_epoll/ ``` Note: Google Test filter = ProxyEnd2end/ProxyEnd2endTest.EchoDeadline/3 [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from ProxyEnd2end/ProxyEnd2endTest [ RUN ] ProxyEnd2end/ProxyEnd2endTest.EchoDeadline/3 D0310 15:35:27.349632201 14727 end2end_test.cc:230] TestScenario{use_proxy=true, credentials='INSECURE_CREDENTIALS'} D0310 15:35:27.349675026 14727 ev_posix.c:107] Using polling engine: epoll I0310 15:35:27.352158780 14727 server_builder.cc:247] Synchronous server. Num CQs: 4, Min pollers: 1, Max Pollers: 2147483647, CQ timeout (msec): 10 D0310 15:35:27.352207037 14727 ev_posix.c:107] Using polling engine: epoll I0310 15:35:27.370007394 14727 server_builder.cc:247] Synchronous server. Num CQs: 4, Min pollers: 1, Max Pollers: 2147483647, CQ timeout (msec): 10 E0310 15:35:27.381182981 14727 sync_posix.c:66] assertion failed: pthread_mutex_unlock(mu) == 0 ******************************* Caught signal SIGABRT ```
non_test
c linux opt native bins opt test gtest filter echodeadline fails in master note google test filter echodeadline running test from test case global test environment set up test from echodeadline test cc testscenario use proxy true credentials insecure credentials ev posix c using polling engine epoll server builder cc synchronous server num cqs min pollers max pollers cq timeout msec ev posix c using polling engine epoll server builder cc synchronous server num cqs min pollers max pollers cq timeout msec sync posix c assertion failed pthread mutex unlock mu caught signal sigabrt
0
114,001
9,670,894,410
IssuesEvent
2019-05-21 21:04:53
pyro-ppl/pyro
https://api.github.com/repos/pyro-ppl/pyro
closed
Incompatibility with numpy 1.16.3?
testing
### Description On a native conda installation, `make test` fails at 0f6b7689440310cfb2dffdc2cde3918d6132eb65 seemingly due to numpy incompatibility ### Details On `osx==10.14`, with `conda==4.6.14` and `python=3.7.3`: ``` git checkout 0f6b76 conda update -n root conda conda create -n pyro python=3 conda activate pyro make install make test ``` Error: `numpy.ufunc size changed, may indicate binary incompatibility` [full output](https://github.com/pyro-ppl/pyro/files/3200006/make_test1.txt) [`pip freeze`](https://github.com/pyro-ppl/pyro/files/3200008/pip_freeze.txt) ### Temporary resolution Downgrading numpy via `pip install numpy==1.15.0` seems to have `make test` happy.
1.0
Incompatibility with numpy 1.16.3? - ### Description On a native conda installation, `make test` fails at 0f6b7689440310cfb2dffdc2cde3918d6132eb65 seemingly due to numpy incompatibility ### Details On `osx==10.14`, with `conda==4.6.14` and `python=3.7.3`: ``` git checkout 0f6b76 conda update -n root conda conda create -n pyro python=3 conda activate pyro make install make test ``` Error: `numpy.ufunc size changed, may indicate binary incompatibility` [full output](https://github.com/pyro-ppl/pyro/files/3200006/make_test1.txt) [`pip freeze`](https://github.com/pyro-ppl/pyro/files/3200008/pip_freeze.txt) ### Temporary resolution Downgrading numpy via `pip install numpy==1.15.0` seems to have `make test` happy.
test
incompatibility with numpy description on a native conda installation make test fails at seemingly due to numpy incompatibility details on osx with conda and python git checkout conda update n root conda conda create n pyro python conda activate pyro make install make test error numpy ufunc size changed may indicate binary incompatibility temporary resolution downgrading numpy via pip install numpy seems to have make test happy
1
301,384
26,043,850,454
IssuesEvent
2022-12-22 12:48:22
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
DISABLED test_broadcast_in_dim_cuda_float32 (__main__.TestPrimsCUDA)
module: rocm module: tests triaged skipped
Platforms: rocm This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_broadcast_in_dim_cuda_float32%2C%20TestPrimsCUDA)). cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH @mruberry
1.0
DISABLED test_broadcast_in_dim_cuda_float32 (__main__.TestPrimsCUDA) - Platforms: rocm This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_broadcast_in_dim_cuda_float32%2C%20TestPrimsCUDA)). cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH @mruberry
test
disabled test broadcast in dim cuda main testprimscuda platforms rocm this test was disabled because it is failing on master cc jeffdaily jithunnair amd rocmsupport kyleczh mruberry
1
58,781
7,178,791,435
IssuesEvent
2018-01-31 17:31:44
ampproject/amphtml
https://api.github.com/repos/ampproject/amphtml
closed
Design Review 2018-01-31 16:30 UTC (Africa/Europe/western Asia)
P2: Soon Type: Design Review
Time: [2018-01-31 16:30 UTC](https://www.timeanddate.com/worldclock/meeting.html?year=2018&month=1&day=31&iv=0) ([add to Google Calendar](http://www.google.com/calendar/event?action=TEMPLATE&text=AMP%20Project%20Design%20Review&dates=20180131T163000Z/20180131T173000Z&details=https%3A%2F%2Fhangouts.google.com%2Fhangouts%2F_%2Fgoogle.com%2Famp-eng-desrev)) Location: [Video conference via Google Hangouts](https://hangouts.google.com/hangouts/_/google.com/amp-eng-desrev) The AMP Project holds weekly engineering [design reviews](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md). **We encourage everyone in the community to participate in these design reviews.** If you are interested in bringing your design to design review, read the [design review documentation](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md) and add a link to your design doc by the Monday before your design review. When attending a design review please read through the designs *before* the design review starts. This allows us to spend more time on discussion of the design. We rotate our design review between times that work better for different parts of the world as described in our [design review documentation](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md), but you are welcome to attend any design review. If you cannot make any of the design reviews but have a design to discuss please let mrjoro@ know on [Slack](https://github.com/ampproject/amphtml/blob/master/CONTRIBUTING.md#discussion-channels) and we will find a time that works for you.
1.0
Design Review 2018-01-31 16:30 UTC (Africa/Europe/western Asia) - Time: [2018-01-31 16:30 UTC](https://www.timeanddate.com/worldclock/meeting.html?year=2018&month=1&day=31&iv=0) ([add to Google Calendar](http://www.google.com/calendar/event?action=TEMPLATE&text=AMP%20Project%20Design%20Review&dates=20180131T163000Z/20180131T173000Z&details=https%3A%2F%2Fhangouts.google.com%2Fhangouts%2F_%2Fgoogle.com%2Famp-eng-desrev)) Location: [Video conference via Google Hangouts](https://hangouts.google.com/hangouts/_/google.com/amp-eng-desrev) The AMP Project holds weekly engineering [design reviews](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md). **We encourage everyone in the community to participate in these design reviews.** If you are interested in bringing your design to design review, read the [design review documentation](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md) and add a link to your design doc by the Monday before your design review. When attending a design review please read through the designs *before* the design review starts. This allows us to spend more time on discussion of the design. We rotate our design review between times that work better for different parts of the world as described in our [design review documentation](https://github.com/ampproject/amphtml/blob/master/contributing/design-reviews.md), but you are welcome to attend any design review. If you cannot make any of the design reviews but have a design to discuss please let mrjoro@ know on [Slack](https://github.com/ampproject/amphtml/blob/master/CONTRIBUTING.md#discussion-channels) and we will find a time that works for you.
non_test
design review utc africa europe western asia time location the amp project holds weekly engineering we encourage everyone in the community to participate in these design reviews if you are interested in bringing your design to design review read the and add a link to your design doc by the monday before your design review when attending a design review please read through the designs before the design review starts this allows us to spend more time on discussion of the design we rotate our design review between times that work better for different parts of the world as described in our but you are welcome to attend any design review if you cannot make any of the design reviews but have a design to discuss please let mrjoro know on and we will find a time that works for you
0
255,240
21,912,891,788
IssuesEvent
2022-05-21 10:42:03
neondatabase/neon
https://api.github.com/repos/neondatabase/neon
opened
`test_pageserver_http_get_wal_receiver_success` is flaky
c/storage/pageserver c/test-runner
E.g. see here: https://app.circleci.com/pipelines/github/neondatabase/neon/6589/workflows/8ef21c54-1f2e-4346-98e3-accb4f4f0910/jobs/66732 ``` test_runner/batch_others/test_pageserver_api.py:87: in test_pageserver_http_get_wal_receiver_success assert res2["last_received_msg_lsn"] > res["last_received_msg_lsn"] E TypeError: '>' not supported between instances of 'str' and 'NoneType' ``` My suspicion is that there is no guarantee that a timeline is fully(?) propagated to Pageserver once `env.postgres.create_start` is completed, so it's possible that `env.pageserver.http_client().wal_receiver_get(....)` returns `None` in the `last_received_msg_lsn` field. If it's the case, who does the propagation? Is it just delayed by local network or is it done by some background thread on timeout? Should we wait for it instead? Should we wait in the `create_start` function itself?
1.0
`test_pageserver_http_get_wal_receiver_success` is flaky - E.g. see here: https://app.circleci.com/pipelines/github/neondatabase/neon/6589/workflows/8ef21c54-1f2e-4346-98e3-accb4f4f0910/jobs/66732 ``` test_runner/batch_others/test_pageserver_api.py:87: in test_pageserver_http_get_wal_receiver_success assert res2["last_received_msg_lsn"] > res["last_received_msg_lsn"] E TypeError: '>' not supported between instances of 'str' and 'NoneType' ``` My suspicion is that there is no guarantee that a timeline is fully(?) propagated to Pageserver once `env.postgres.create_start` is completed, so it's possible that `env.pageserver.http_client().wal_receiver_get(....)` returns `None` in the `last_received_msg_lsn` field. If it's the case, who does the propagation? Is it just delayed by local network or is it done by some background thread on timeout? Should we wait for it instead? Should we wait in the `create_start` function itself?
test
test pageserver http get wal receiver success is flaky e g see here test runner batch others test pageserver api py in test pageserver http get wal receiver success assert res e typeerror not supported between instances of str and nonetype my suspicion is that there is no guarantee that a timeline is fully propagated to pageserver once env postgres create start is completed so it s possible that env pageserver http client wal receiver get returns none in the last received msg lsn field if it s the case who does the propagation is it just delayed by local network or is it done by some background thread on timeout should we wait for it instead should we wait in the create start function itself
1
51,514
6,175,838,687
IssuesEvent
2017-07-01 07:06:00
apache/bookkeeper
https://api.github.com/repos/apache/bookkeeper
closed
TestAuth#testCloseMethodCalledOnAuthProvider is failing
area/tests priority/blocker type/bug
**BUG REPORT** 1. Please describe the issue you observed: - What did you do? Run TestAuth. - What did you expect to see? The test should pass. - What did you see instead? The test failed. Because the shutdown is called twice after this change 9ddd9e6f9e48b03a57a2c78ec2630303abd49782 . So the assertion fails.
1.0
TestAuth#testCloseMethodCalledOnAuthProvider is failing - **BUG REPORT** 1. Please describe the issue you observed: - What did you do? Run TestAuth. - What did you expect to see? The test should pass. - What did you see instead? The test failed. Because the shutdown is called twice after this change 9ddd9e6f9e48b03a57a2c78ec2630303abd49782 . So the assertion fails.
test
testauth testclosemethodcalledonauthprovider is failing bug report please describe the issue you observed what did you do run testauth what did you expect to see the test should pass what did you see instead the test failed because the shutdown is called twice after this change so the assertion fails
1
17,153
11,733,244,458
IssuesEvent
2020-03-11 06:32:26
godotengine/godot
https://api.github.com/repos/godotengine/godot
closed
Auto-completion in comments not entirely useful for apostrophes.
bug topic:editor usability
**Godot version:** 3.2.0 stable **OS/device including version:** N/A **Issue description:** Auto-completion has its issues when you're writing comments. For example `# Can't and won't do this.` would duplicate the `'` character (i.e. treat it as a single quote). This is a minor issue, but makes contractions (i.e. "it's", "can't" etc) slightly more difficult than it needs to be. Maybe look at the way other projects such as Inform 7 (for example) handle this? **Steps to reproduce:** Just press `'`.
True
Auto-completion in comments not entirely useful for apostrophes. - **Godot version:** 3.2.0 stable **OS/device including version:** N/A **Issue description:** Auto-completion has its issues when you're writing comments. For example `# Can't and won't do this.` would duplicate the `'` character (i.e. treat it as a single quote). This is a minor issue, but makes contractions (i.e. "it's", "can't" etc) slightly more difficult than it needs to be. Maybe look at the way other projects such as Inform 7 (for example) handle this? **Steps to reproduce:** Just press `'`.
non_test
auto completion in comments not entirely useful for apostrophes godot version stable os device including version n a issue description auto completion has its issues when you re writing comments for example can t and won t do this would duplicate the character i e treat it as a single quote this is a minor issue but makes contractions i e it s can t etc slightly more difficult than it needs to be maybe look at the way other projects such as inform for example handle this steps to reproduce just press
0
294,540
9,036,667,860
IssuesEvent
2019-02-09 01:55:54
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
opened
systems: Non-owning `FixInputPort`?
priority: low team: dynamics
Relates #10592 For some of my workflows, it's nice to have a non-Diagram built System, and just mock the inputs to quickly test the input, then throw the system away (or leave it completely decoupled). This is nice because for MBP+SG workflows, adding any system to the diagram requires exclusive ownership transfer, meaning you can only do it once. It's nice to quickly test stuff without having to own stuff. In Anzu, I'm porting stuff from RBT to MBP. One thing I had before was a simple wrapper around the RBT `RgbdCamera` that handled all the systems stuff, so I could just write something like: ``` color, depth, label = SimpleCamera(tree, frame).render(q) ``` and have `np.ndarray` images. Right now, I'm seeing how much I can avoid needing to transfer ownership, and am hacking around. I was _almost_ there, with (some hacky) code like this: ```py class WrapperCamera(object): def __init__(self, source_id, scene_graph, camera): ... def render(self, sg_context): ... rsg_context = render_sg.CreateDefaultContext() poses = scene_graph.get_source_pose_port( self._source_id).EvalAbstract(sg_context) rsg_context.FixInputPort( render_sg.get_source_pose_port(self._source_id).get_index(), poses.Clone()) c_context = camera.CreateDefaultContext() camera_context.FixInputPort( camera.query_object_input_port().get_index(), # Tip the domino... render_sg.get_query_output_port().EvalAbstract(rsg_context).Clone()) # Render. # FAIL: QueryObject is (quite correctly?) non-copyable, and triggers # failure. color_image = camera.color_image_output_port().Eval(c_context) ``` Since this is throwaway objects, it'd be nice to temporarily affix a non-owned value to the input port. The workaround, since luckily this is localized to `(dev::SceneGraph, RgbdCamera)`, is to put them both in a diagram. However, this workaround will evaporate once `SceneGraph` gains rendering super powers. @sherm1 @SeanCurtis-TRI @jwnimmer-tri Can I ask your thoughts on this?
1.0
systems: Non-owning `FixInputPort`? - Relates #10592 For some of my workflows, it's nice to have a non-Diagram built System, and just mock the inputs to quickly test the input, then throw the system away (or leave it completely decoupled). This is nice because for MBP+SG workflows, adding any system to the diagram requires exclusive ownership transfer, meaning you can only do it once. It's nice to quickly test stuff without having to own stuff. In Anzu, I'm porting stuff from RBT to MBP. One thing I had before was a simple wrapper around the RBT `RgbdCamera` that handled all the systems stuff, so I could just write something like: ``` color, depth, label = SimpleCamera(tree, frame).render(q) ``` and have `np.ndarray` images. Right now, I'm seeing how much I can avoid needing to transfer ownership, and am hacking around. I was _almost_ there, with (some hacky) code like this: ```py class WrapperCamera(object): def __init__(self, source_id, scene_graph, camera): ... def render(self, sg_context): ... rsg_context = render_sg.CreateDefaultContext() poses = scene_graph.get_source_pose_port( self._source_id).EvalAbstract(sg_context) rsg_context.FixInputPort( render_sg.get_source_pose_port(self._source_id).get_index(), poses.Clone()) c_context = camera.CreateDefaultContext() camera_context.FixInputPort( camera.query_object_input_port().get_index(), # Tip the domino... render_sg.get_query_output_port().EvalAbstract(rsg_context).Clone()) # Render. # FAIL: QueryObject is (quite correctly?) non-copyable, and triggers # failure. color_image = camera.color_image_output_port().Eval(c_context) ``` Since this is throwaway objects, it'd be nice to temporarily affix a non-owned value to the input port. The workaround, since luckily this is localized to `(dev::SceneGraph, RgbdCamera)`, is to put them both in a diagram. However, this workaround will evaporate once `SceneGraph` gains rendering super powers. @sherm1 @SeanCurtis-TRI @jwnimmer-tri Can I ask your thoughts on this?
non_test
systems non owning fixinputport relates for some of my workflows it s nice to have a non diagram built system and just mock the inputs to quickly test the input then throw the system away or leave it completely decoupled this is nice because for mbp sg workflows adding any system to the diagram requires exclusive ownership transfer meaning you can only do it once it s nice to quickly test stuff without having to own stuff in anzu i m porting stuff from rbt to mbp one thing i had before was a simple wrapper around the rbt rgbdcamera that handled all the systems stuff so i could just write something like color depth label simplecamera tree frame render q and have np ndarray images right now i m seeing how much i can avoid needing to transfer ownership and am hacking around i was almost there with some hacky code like this py class wrappercamera object def init self source id scene graph camera def render self sg context rsg context render sg createdefaultcontext poses scene graph get source pose port self source id evalabstract sg context rsg context fixinputport render sg get source pose port self source id get index poses clone c context camera createdefaultcontext camera context fixinputport camera query object input port get index tip the domino render sg get query output port evalabstract rsg context clone render fail queryobject is quite correctly non copyable and triggers failure color image camera color image output port eval c context since this is throwaway objects it d be nice to temporarily affix a non owned value to the input port the workaround since luckily this is localized to dev scenegraph rgbdcamera is to put them both in a diagram however this workaround will evaporate once scenegraph gains rendering super powers seancurtis tri jwnimmer tri can i ask your thoughts on this
0
24,750
11,090,229,942
IssuesEvent
2019-12-15 01:03:23
uniquelyparticular/sync-stripe-to-zendesk
https://api.github.com/repos/uniquelyparticular/sync-stripe-to-zendesk
opened
WS-2019-0307 (Medium) detected in mem-1.1.0.tgz
security vulnerability
## WS-2019-0307 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary> <p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p> <p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/sync-stripe-to-zendesk/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/sync-stripe-to-zendesk/node_modules/npm/node_modules/mem/package.json</p> <p> Dependency Hierarchy: - micro-dev-3.0.0.tgz (Root Library) - jsome-2.5.0.tgz - yargs-11.1.0.tgz - os-locale-2.1.0.tgz - :x: **mem-1.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/sync-stripe-to-zendesk/commit/3e7abf486f7e55b3c48d47ce7e81077ff5fcd0bc">3e7abf486f7e55b3c48d47ce7e81077ff5fcd0bc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Denial of Service (DoS) vulnerability found in mem before 4.0.0. There is a failure in removal of old values from the cache. As a result, attacker may exhaust the system's memory. 
<p>Publish Date: 2019-12-01 <p>URL: <a href=https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b>WS-2019-0307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p> <p>Release Date: 2019-12-01</p> <p>Fix Resolution: mem - 4.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0307 (Medium) detected in mem-1.1.0.tgz - ## WS-2019-0307 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary> <p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p> <p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/sync-stripe-to-zendesk/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/sync-stripe-to-zendesk/node_modules/npm/node_modules/mem/package.json</p> <p> Dependency Hierarchy: - micro-dev-3.0.0.tgz (Root Library) - jsome-2.5.0.tgz - yargs-11.1.0.tgz - os-locale-2.1.0.tgz - :x: **mem-1.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/sync-stripe-to-zendesk/commit/3e7abf486f7e55b3c48d47ce7e81077ff5fcd0bc">3e7abf486f7e55b3c48d47ce7e81077ff5fcd0bc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Denial of Service (DoS) vulnerability found in mem before 4.0.0. There is a failure in removal of old values from the cache. As a result, attacker may exhaust the system's memory. 
<p>Publish Date: 2019-12-01 <p>URL: <a href=https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b>WS-2019-0307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p> <p>Release Date: 2019-12-01</p> <p>Fix Resolution: mem - 4.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
ws medium detected in mem tgz ws medium severity vulnerability vulnerable library mem tgz memoize functions an optimization used to speed up consecutive function calls by caching the result of calls with identical input library home page a href path to dependency file tmp ws scm sync stripe to zendesk package json path to vulnerable library tmp ws scm sync stripe to zendesk node modules npm node modules mem package json dependency hierarchy micro dev tgz root library jsome tgz yargs tgz os locale tgz x mem tgz vulnerable library found in head commit a href vulnerability details denial of service dos vulnerability found in mem before there is a failure in removal of old values from the cache as a result attacker may exhaust the system s memory publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution mem step up your open source security game with whitesource
0
176,547
14,590,859,007
IssuesEvent
2020-12-19 10:02:40
jsheunis/fMRwhy
https://api.github.com/repos/jsheunis/fMRwhy
opened
Feedback Sample QC report
documentation
Hi Stephan, I just checked your sample QC report. First of all it is very clear already and well written. I still have some minor comments, which are the following: - The file itself, the sample QC report is not in the fmrwhy Github page itself. Is it only accessible by clicking on the link inside quality.rst? - 'Masks were generated… ( a combination – logical OR after thresholding – of the previous masks)' --> masks are already generated after thresholding, what do you mean here? - 'If physiology data are available for a run, quality metrics for these are extracted from the PhysIO pipeline included as a dependency in this toolbox.' --> Maybe add what happens if PhysIO is not downloaded, will it just omit the plot or raise a warning? - 'The time series mean gives a quick view of the general quality of the time series and can also spikes or interference artefacts.' --> 'can also **reveal** spikes or interference art**i**facts.' - In the description of Time series QC plots: ‘The vertical axis indicates single voxels, either grouped together per tissue type (compartment ordered) or ordered from top to bottom according to the voxel's time series correlation strength to the global signal.’ --> Maybe mention that the color axis indicate the voxels per tissue type with red being global signal, yellow WM and CSF blue (even though it is already displayed in the time series above it might be useful because users might miss this). - ‘Single traces above the grayplot are also shown, including tissue compartment signals, respiration, heart rate, and framewise displacement. ‘. --> Are these the measured respiration and heart rate and only shown if these are externally measured / PhysIO was downloaded? Or are they the estimated respiration / heart rate confounders? - ‘The top plot displays temporal lag between derived heart beats within thresholds for outliers’ --> What are these outliers thresholds and how were they determined? 
Maybe if you will describe it later somewhere, add a link to it or shortly describe it here. - In the table of functional QC metrics you show FD outliers but this abbreviation is not defined yet. Maybe add it here: “This includes framwise displacement (FD) means and outliers.. “. This also holds for GM and WM and maybe also write out CSF once. - In the table Mean tSNR (brain) you could perhaps change it to Mean tSNR (whole brain) because you describe how you obtained the ‘whole brain’ maps before by using logical OR combination. Or is it also including CSF? - Temporal QC plots: With global signal do you mean signal from voxels of the whole brain or literally all voxels including CSF?
1.0
Feedback Sample QC report - Hi Stephan, I just checked your sample QC report. First of all it is very clear already and well written. I still have some minor comments, which are the following: - The file itself, the sample QC report is not in the fmrwhy Github page itself. Is it only accessible by clicking on the link inside quality.rst? - 'Masks were generated… ( a combination – logical OR after thresholding – of the previous masks)' --> masks are already generated after thresholding, what do you mean here? - 'If physiology data are available for a run, quality metrics for these are extracted from the PhysIO pipeline included as a dependency in this toolbox.' --> Maybe add what happens if PhysIO is not downloaded, will it just omit the plot or raise a warning? - 'The time series mean gives a quick view of the general quality of the time series and can also spikes or interference artefacts.' --> 'can also **reveal** spikes or interference art**i**facts.' - In the description of Time series QC plots: ‘The vertical axis indicates single voxels, either grouped together per tissue type (compartment ordered) or ordered from top to bottom according to the voxel's time series correlation strength to the global signal.’ --> Maybe mention that the color axis indicate the voxels per tissue type with red being global signal, yellow WM and CSF blue (even though it is already displayed in the time series above it might be useful because users might miss this). - ‘Single traces above the grayplot are also shown, including tissue compartment signals, respiration, heart rate, and framewise displacement. ‘. --> Are these the measured respiration and heart rate and only shown if these are externally measured / PhysIO was downloaded? Or are they the estimated respiration / heart rate confounders? - ‘The top plot displays temporal lag between derived heart beats within thresholds for outliers’ --> What are these outliers thresholds and how were they determined? 
Maybe if you will describe it later somewhere, add a link to it or shortly describe it here. - In the table of functional QC metrics you show FD outliers but this abbreviation is not defined yet. Maybe add it here: “This includes framwise displacement (FD) means and outliers.. “. This also holds for GM and WM and maybe also write out CSF once. - In the table Mean tSNR (brain) you could perhaps change it to Mean tSNR (whole brain) because you describe how you obtained the ‘whole brain’ maps before by using logical OR combination. Or is it also including CSF? - Temporal QC plots: With global signal do you mean signal from voxels of the whole brain or literally all voxels including CSF?
non_test
feedback sample qc report hi stephan i just checked your sample qc report first of all it is very clear already and well written i still have some minor comments which are the following the file itself the sample qc report is not in the fmrwhy github page itself is it only accessible by clicking on the link inside quality rst masks were generated… a combination – logical or after thresholding – of the previous masks masks are already generated after thresholding what do you mean here if physiology data are available for a run quality metrics for these are extracted from the physio pipeline included as a dependency in this toolbox maybe add what happens if physio is not downloaded will it just omit the plot or raise a warning the time series mean gives a quick view of the general quality of the time series and can also spikes or interference artefacts can also reveal spikes or interference art i facts in the description of time series qc plots ‘the vertical axis indicates single voxels either grouped together per tissue type compartment ordered or ordered from top to bottom according to the voxel s time series correlation strength to the global signal ’ maybe mention that the color axis indicate the voxels per tissue type with red being global signal yellow wm and csf blue even though it is already displayed in the time series above it might be useful because users might miss this ‘single traces above the grayplot are also shown including tissue compartment signals respiration heart rate and framewise displacement ‘ are these the measured respiration and heart rate and only shown if these are externally measured physio was downloaded or are they the estimated respiration heart rate confounders ‘the top plot displays temporal lag between derived heart beats within thresholds for outliers’ what are these outliers thresholds and how were they determined maybe if you will describe it later somewhere add a link to it or shortly describe it here in the table of functional 
qc metrics you show fd outliers but this abbreviation is not defined yet maybe add it here “this includes framwise displacement fd means and outliers “ this also holds for gm and wm and maybe also write out csf once in the table mean tsnr brain you could perhaps change it to mean tsnr whole brain because you describe how you obtained the ‘whole brain’ maps before by using logical or combination or is it also including csf temporal qc plots with global signal do you mean signal from voxels of the whole brain or literally all voxels including csf
0
244,596
20,678,704,556
IssuesEvent
2022-03-10 11:48:41
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
Harry Potter Housing Glitch
bug Not Tested
**Describe the bug** [15s video showing the bug](https://i.imgur.com/mCHKFpS.mp4) **To Reproduce** Steps to reproduce the behavior: First you need to set up two tents directly behind each other, or houses, it works with both. <img src="https://user-images.githubusercontent.com/27786664/156955684-36b147f8-a065-463d-a33c-3f3456ec892a.png" width="200" /> It works best when these are as close and parallel to each other as possible When the tents are set up you can walk in the lower tent from the bottom and out again from the top. <img src="https://user-images.githubusercontent.com/27786664/156955890-e88f8b86-0b57-40cc-baa8-2bc473e6022b.png" width="300" /> **Expected behavior** The behavior of the wall should not change because one is so close to the entrance of another tent. **Desktop (please complete the following information):** - OS: Windows 10 - Game Version: Beta 1.0.0 - Steam Version ![image](https://user-images.githubusercontent.com/27786664/156956142-de0dc913-b6de-4916-89bd-62990b707c20.png)
1.0
Harry Potter Housing Glitch - **Describe the bug** [15s video showing the bug](https://i.imgur.com/mCHKFpS.mp4) **To Reproduce** Steps to reproduce the behavior: First you need to set up two tents directly behind each other, or houses, it works with both. <img src="https://user-images.githubusercontent.com/27786664/156955684-36b147f8-a065-463d-a33c-3f3456ec892a.png" width="200" /> It works best when these are as close and parallel to each other as possible When the tents are set up you can walk in the lower tent from the bottom and out again from the top. <img src="https://user-images.githubusercontent.com/27786664/156955890-e88f8b86-0b57-40cc-baa8-2bc473e6022b.png" width="300" /> **Expected behavior** The behavior of the wall should not change because one is so close to the entrance of another tent. **Desktop (please complete the following information):** - OS: Windows 10 - Game Version: Beta 1.0.0 - Steam Version ![image](https://user-images.githubusercontent.com/27786664/156956142-de0dc913-b6de-4916-89bd-62990b707c20.png)
test
harry potter housing glitch describe the bug to reproduce steps to reproduce the behavior first you need to set up two tents directly behind each other or houses it works with both it works best when these are as close and parallel to each other as possible when the tents are set up you can walk in the lower tent from the bottom and out again from the top expected behavior the behavior of the wall should not change because one is so close to the entrance of another tent desktop please complete the following information os windows game version beta steam version
1
202,007
15,244,079,392
IssuesEvent
2021-02-19 12:14:27
dmariel/scrapy
https://api.github.com/repos/dmariel/scrapy
opened
Improve test coverage for _get_form in http/request/form.py
test_coverage
Branch not covered @104-105 @http/request/form.py for _get_form
1.0
Improve test coverage for _get_form in http/request/form.py - Branch not covered @104-105 @http/request/form.py for _get_form
test
improve test coverage for get form in http request form py branch not covered http request form py for get form
1
125,324
26,638,234,448
IssuesEvent
2023-01-25 00:34:21
dotnet/razor
https://api.github.com/repos/dotnet/razor
closed
[Linux] Rename, code lens doesn't work until edit is made in file
bug vscode
Found via testing on Linux: Before edit is made in file, code lens (and rename) doesn't work: ![razorlinux3](https://user-images.githubusercontent.com/16968319/97934023-4ac19f80-1d29-11eb-8be4-3246c2896640.png) After edit is made in file, code lens (and rename) works - e.g. in screenshot below, new line was added and now code lens works (yay): ![razorlinux2](https://user-images.githubusercontent.com/16968319/97934031-501eea00-1d29-11eb-9ac9-5c65f51941c1.png)
1.0
[Linux] Rename, code lens doesn't work until edit is made in file - Found via testing on Linux: Before edit is made in file, code lens (and rename) doesn't work: ![razorlinux3](https://user-images.githubusercontent.com/16968319/97934023-4ac19f80-1d29-11eb-8be4-3246c2896640.png) After edit is made in file, code lens (and rename) works - e.g. in screenshot below, new line was added and now code lens works (yay): ![razorlinux2](https://user-images.githubusercontent.com/16968319/97934031-501eea00-1d29-11eb-9ac9-5c65f51941c1.png)
non_test
rename code lens doesn t work until edit is made in file found via testing on linux before edit is made in file code lens and rename doesn t work after edit is made in file code lens and rename works e g in screenshot below new line was added and now code lens works yay
0
65,261
6,953,313,248
IssuesEvent
2017-12-06 20:33:41
pouchdb/pouchdb
https://api.github.com/repos/pouchdb/pouchdb
closed
fetch() tests fail on Firefox 50
bug has test case wontfix
Right now we test in Travis on Firefox 47, but https://github.com/pouchdb/pouchdb/pull/6072 proved that upgrading to Firefox 50 generates a genuine failure for the `fetch()` tests. We should investigate to figure out if this is A) a PouchDB bug or B) a Firefox bug.
1.0
fetch() tests fail on Firefox 50 - Right now we test in Travis on Firefox 47, but https://github.com/pouchdb/pouchdb/pull/6072 proved that upgrading to Firefox 50 generates a genuine failure for the `fetch()` tests. We should investigate to figure out if this is A) a PouchDB bug or B) a Firefox bug.
test
fetch tests fail on firefox right now we test in travis on firefox but proved that upgrading to firefox generates a genuine failure for the fetch tests we should investigate to figure out if this is a a pouchdb bug or b a firefox bug
1
144,119
11,595,550,195
IssuesEvent
2020-02-24 17:12:13
saltstack/salt
https://api.github.com/repos/saltstack/salt
closed
integration.modules.test_state.StateModuleTest.test_retry_option_eventual_success
Confirmed Test Failure
Flaky test https://jenkinsci.saltstack.com/job/pr-kitchen-amazon2-py2/job/PR-54590/4/
1.0
integration.modules.test_state.StateModuleTest.test_retry_option_eventual_success - Flaky test https://jenkinsci.saltstack.com/job/pr-kitchen-amazon2-py2/job/PR-54590/4/
test
integration modules test state statemoduletest test retry option eventual success flaky test
1
734,658
25,357,544,171
IssuesEvent
2022-11-20 14:17:34
bounswe/bounswe2022group7
https://api.github.com/repos/bounswe/bounswe2022group7
opened
Implementation of Discussion Post and Comment endpoints
Status: Not Yet Started Priority: Low Difficulty: Medium Type: Implementation Target: Backend
Since I completed my weekly tasks and still have time, I am going to write the upcoming week's discussion post and comment related endpoints that are already discussed in the [team meeting](https://github.com/bounswe/bounswe2022group7/wiki/CMPE451-Meeting-Notes-%236) - [ ] GET DiscussionForum - [ ] GET Discussion - [ ] POST Discussion - [ ] POST Comment - [ ] GET Comment (?) **Reviewer:** @demet47 **Deadline:** 28/11/2022
1.0
Implementation of Discussion Post and Comment endpoints - Since I completed my weekly tasks and still have time, I am going to write the upcoming week's discussion post and comment related endpoints that are already discussed in the [team meeting](https://github.com/bounswe/bounswe2022group7/wiki/CMPE451-Meeting-Notes-%236) - [ ] GET DiscussionForum - [ ] GET Discussion - [ ] POST Discussion - [ ] POST Comment - [ ] GET Comment (?) **Reviewer:** @demet47 **Deadline:** 28/11/2022
non_test
implementation of discussion post and comment endpoints since i completed my weekly tasks and still have time i am going to write the upcoming week s discussion post and comment related endpoints that are already discussed in the get discussionforum get discussion post discussion post comment get comment reviewer deadline
0
231,693
17,703,757,276
IssuesEvent
2021-08-25 03:44:30
serilog-contrib/Serilog.Logfmt
https://api.github.com/repos/serilog-contrib/Serilog.Logfmt
opened
Add Serilog Extension icon to Serilog.Logfmt NuGet package
documentation
The `Serilog.Logfmt` NuGet package doesn't yet have an icon. ![image](https://user-images.githubusercontent.com/177608/130722905-32c6d222-3488-449f-93a3-bf45b19c7e8a.png) I suggest using the standard icon for extensions: ![serilog standard extensions icon](https://serilog.net/images/serilog-extension-nuget.png)
1.0
Add Serilog Extension icon to Serilog.Logfmt NuGet package - The `Serilog.Logfmt` NuGet package doesn't yet have an icon. ![image](https://user-images.githubusercontent.com/177608/130722905-32c6d222-3488-449f-93a3-bf45b19c7e8a.png) I suggest using the standard icon for extensions: ![serilog standard extensions icon](https://serilog.net/images/serilog-extension-nuget.png)
non_test
add serilog extension icon to serilog logfmt nuget package the serilog logfmt nuget package doesn t yet have an icon i suggest using the standard icon for extensions
0
68,317
28,351,631,158
IssuesEvent
2023-04-12 03:10:25
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
The information about hybrid connections being OS-independent is contradictory
app-service/svc triaged cxp doc-enhancement Pri1
See these two texts. The first one says it is OS-independent: >This enables your apps to access resources on any OS, provided it's a TCP endpoint The second one makes one think the only relay agent is available for Windows. >The Hybrid Connections feature requires a relay agent in the network that hosts your Hybrid Connection endpoint. That relay agent is called the Hybrid Connection Manager (HCM). To download HCM, from your app in the Azure portal, select Networking > Configure your Hybrid Connection endpoints. > >This tool runs on Windows Server 2012 and later. The HCM runs as a service and connects outbound to Azure Relay on port 443. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 645d01fc-eae4-67c0-ef63-77d34d854d5e * Version Independent ID: 516bc8a7-d927-d080-36a6-69ca95686feb * Content: [Hybrid connections - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections#hybrid-connection-manager) * Content Source: [articles/app-service/app-service-hybrid-connections.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/app-service-hybrid-connections.md) * Service: **app-service** * GitHub Login: @madsd * Microsoft Alias: **madsd**
1.0
The information about hybrid connections being OS-independent is contradictory - See these two texts. The first one says it is OS-independent: >This enables your apps to access resources on any OS, provided it's a TCP endpoint The second one makes one think the only relay agent is available for Windows. >The Hybrid Connections feature requires a relay agent in the network that hosts your Hybrid Connection endpoint. That relay agent is called the Hybrid Connection Manager (HCM). To download HCM, from your app in the Azure portal, select Networking > Configure your Hybrid Connection endpoints. > >This tool runs on Windows Server 2012 and later. The HCM runs as a service and connects outbound to Azure Relay on port 443. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 645d01fc-eae4-67c0-ef63-77d34d854d5e * Version Independent ID: 516bc8a7-d927-d080-36a6-69ca95686feb * Content: [Hybrid connections - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections#hybrid-connection-manager) * Content Source: [articles/app-service/app-service-hybrid-connections.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/app-service-hybrid-connections.md) * Service: **app-service** * GitHub Login: @madsd * Microsoft Alias: **madsd**
non_test
the information about hybrid connections being os independent is contradictory see these two texts the first one says it is os independent this enables your apps to access resources on any os provided it s a tcp endpoint the second one makes one think the only relay agent is available for windows the hybrid connections feature requires a relay agent in the network that hosts your hybrid connection endpoint that relay agent is called the hybrid connection manager hcm to download hcm from your app in the azure portal select networking configure your hybrid connection endpoints this tool runs on windows server and later the hcm runs as a service and connects outbound to azure relay on port document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service app service github login madsd microsoft alias madsd
0
58,153
6,575,787,422
IssuesEvent
2017-09-11 17:19:37
First-timers-bot/first-timers-test
https://api.github.com/repos/First-timers-bot/first-timers-test
opened
change title
first-timers-only-testing
### 🆕🐥☝ First Timers Only. This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you 💝 [About First Timers Only](http://www.firsttimersonly.com/). ### 🤔 What you will need to know. Nothing. This issue is meant to welcome you to Open Source :) We are happy to walk you through the process. ### 📋 Step by Step - [ ] 🙋 **Claim this issue**: Comment below. Once claimed we add you as contributor to this repository. - [ ] 👌 **Accept our invitation** to this repository. Once accepted, assign yourself to this repository - [ ] 📝 **Update** the file [README.md](https://github.com/First-timers-bot/first-timers-test/blob/78f11d25ae3287c3889ee9d3b8baeb41dc82c0ea/README.md) in the `first-timers-test` repository (press the little pen Icon) and edit the line as shown below. ```diff @@ -1 +1 @@ -# first-timers-test \ No newline at end of file +# hello world ``` - [ ] 💾 **Commit** your changes - [ ] 🔀 **Start a Pull Request**. There are two ways how you can start a pull request: 1. If you are familiar with the terminal or would like to learn it, [here is a great tutorial](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github) on how to send a pull request using the terminal. 2. You can [edit files directly in your browser](https://help.github.com/articles/editing-files-in-your-repository/) - [ ] 🏁 **Done** Ask in comments for a review :) ### 🤔❓ Questions Leave a comment below! This issue was created by [First-Timers-Bot](https://github.com/hoodiehq/first-timers-bot).
1.0
change title - ### 🆕🐥☝ First Timers Only. This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you 💝 [About First Timers Only](http://www.firsttimersonly.com/). ### 🤔 What you will need to know. Nothing. This issue is meant to welcome you to Open Source :) We are happy to walk you through the process. ### 📋 Step by Step - [ ] 🙋 **Claim this issue**: Comment below. Once claimed we add you as contributor to this repository. - [ ] 👌 **Accept our invitation** to this repository. Once accepted, assign yourself to this repository - [ ] 📝 **Update** the file [README.md](https://github.com/First-timers-bot/first-timers-test/blob/78f11d25ae3287c3889ee9d3b8baeb41dc82c0ea/README.md) in the `first-timers-test` repository (press the little pen Icon) and edit the line as shown below. ```diff @@ -1 +1 @@ -# first-timers-test \ No newline at end of file +# hello world ``` - [ ] 💾 **Commit** your changes - [ ] 🔀 **Start a Pull Request**. There are two ways how you can start a pull request: 1. If you are familiar with the terminal or would like to learn it, [here is a great tutorial](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github) on how to send a pull request using the terminal. 2. You can [edit files directly in your browser](https://help.github.com/articles/editing-files-in-your-repository/) - [ ] 🏁 **Done** Ask in comments for a review :) ### 🤔❓ Questions Leave a comment below! This issue was created by [First-Timers-Bot](https://github.com/hoodiehq/first-timers-bot).
test
change title 🆕🐥☝ first timers only this issue is reserved for people who never contributed to open source before we know that the process of creating a pull request is the biggest barrier for new contributors this issue is for you 💝 🤔 what you will need to know nothing this issue is meant to welcome you to open source we are happy to walk you through the process 📋 step by step 🙋 claim this issue comment below once claimed we add you as contributor to this repository 👌 accept our invitation to this repository once accepted assign yourself to this repository 📝 update the file in the first timers test repository press the little pen icon and edit the line as shown below diff first timers test no newline at end of file hello world 💾 commit your changes 🔀 start a pull request there are two ways how you can start a pull request if you are familiar with the terminal or would like to learn it on how to send a pull request using the terminal you can 🏁 done ask in comments for a review 🤔❓ questions leave a comment below this issue was created by
1
309,342
26,660,818,674
IssuesEvent
2023-01-25 20:56:38
MPMG-DCC-UFMG/F01
https://api.github.com/repos/MPMG-DCC-UFMG/F01
closed
Teste de generalizacao para a tag Concursos Públicos - Dados do concurso - Confins
generalization test development template - ABO (21) tag - Concursos Públicos subtag - Dados do Concurso
DoD: Realizar o teste de Generalização do validador da tag Concursos Públicos - Dados do concurso para o Município de Confins.
1.0
Teste de generalizacao para a tag Concursos Públicos - Dados do concurso - Confins - DoD: Realizar o teste de Generalização do validador da tag Concursos Públicos - Dados do concurso para o Município de Confins.
test
teste de generalizacao para a tag concursos públicos dados do concurso confins dod realizar o teste de generalização do validador da tag concursos públicos dados do concurso para o município de confins
1
303,414
26,205,412,447
IssuesEvent
2023-01-03 22:00:33
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix tensor.test_torch_instance_bool
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512702035" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3829547956/jobs/6516369685" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512697158" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512702035" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
1.0
Fix tensor.test_torch_instance_bool - | | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512702035" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3829547956/jobs/6516369685" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512697158" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3827788964/jobs/6512702035" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
test
fix tensor test torch instance bool numpy img src torch img src tensorflow img src jax img src
1
156,723
12,336,029,792
IssuesEvent
2020-05-14 12:59:06
apache/cloudstack-primate
https://api.github.com/repos/apache/cloudstack-primate
closed
Testing high-level test plan template
testing
**Common** - Project selector - [x] pass - [ ] fail #151 - [ ] Language selector | | | | - [ ] Notifications / clear notifications | | | | - [ ] Profile | | | | | Feature | Tester | Result | Related Issues | |---------------|--------|--------|----------------| | **Common** | | | | | - [ ] Project selector | | | | | - [ ] Language selector | | | | | Notifications / clear notifications | | | | | Profile | | | | | Help | | | | | Logout | | | | | | | | | | **Dashboard** | | | | | Fetch latest | | | | | View hosts in alert state | || | | View alerts | | | | | View events | | | | | | | | | | **Compute > Instances** | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Filter by | | | | | Pagination | | | | | Create new instance | || | | | | | | | **Compute > Kubernetes** | | | | | Sort | | | | | Refresh| | | | | Links | | | | |Sub menus | | || | | | | | | **Compute > Instances > selected instance** | | | | | View console | | | | | Reboot instance | | | | | Update instance | | | | | Start/Stop instance | | | | | Reinstall instance | | | | | Take snapshot | | | | | Assign VM to backup offering | | | | | Attach ISO | | | | | Scale VM | | | | | Migrate instance to another host | | | | | Change affinity | | | | | Change service offering | | | | | Reset Instance Password | | | | | Assign Instance to Another Account | | | | | Network adapters | | | | | - Add network to VM | | | | | - Set default NIC | | | | | - Add/delete secondary IP address | | | | | - Delete VM network | | | | | Settings | | | | | - Add setting | | | | | - Update setting | | | | | - Delete setting | | | | | Add / delete comment | | | | | Add / delete tags | | | | | Refresh | | | | | Links | | | | | | | | | | **Compute > Instance groups** | | | | | Search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | New instance group | | | | | Pagination | | | | | | | | | | **Compute > Instance groups > selected instance group** | | | | | Refresh | | | | | 
Links | | | | | Update instance group | | | | | Delete instance group | | | | | | | | | | **Compute > SSH Key Pairs** | | | | | Search | | | | | Sorting | | | | | Refresh | | | | | Links | | | | | Pagination | | | | | New SSH key pair | | | | | | | | | | **Compute > SSH Key Pairs > selected SSH key pair** | | | | | Refresh | | | | | Links | | | | | Delete SSH key pair | | | | | | | | | | **Compute > Affinity Groups** | | | | | Search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | New affinity group | | | | | | | | | | **Compute > Affinity Groups > selected affinity group** | | | | | Refresh | | | | | Links | | | | | Delete affinity group | | | | | | | | | | **Storage > Volumes** | | | | | Basic earch | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Create volume | | | | | Upload local volume | | | | | Upload volume from URL | | | | | Pagination | | | | | | | | | | **Storage > Volumes > selected volume** | | | | | Detach volume | | | | | Take snapshot | | | | | Recurring snapshot | | | | | Resize volume | | | | | Migrate volume | | | | | Download volume | | | | | Delete volume | | | | | Refresh | | | | | Links | | | | | Add/delete tags | | | | | | | | | | **Storage > Snapshots** | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Pagination | | | | | | | | | | **Storage > Snapshots > selected snapshot** | | | | | Refresh | | | | | Links | | | | | Add/delete tags | | | | | Create template | ||| | Create volume | | | | | Revert snapshot | | | | | Delete snapshot | | | | | | | | | | **Storage > VM Snapshots** | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Pagination | | | | | | | | | | **Storage > VM Snapshots > selected snapshot** | | | | | Refresh | | | | | Links | | | | | Add/delete tags | | | | | Revert VM snapshot | | | | | Delete VM snapshot | | | | | | | | | | **Storage > VM Snapshots** | | | | | | | 
| | | **Network > Guest networks** | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Add network | | | | | Pagination | | | | | | | | | | **Network > Guest networks > selected network** | | | | | Links | | | | | Refresh | | | | | Add/delete tags | | | | | Update network | | | | | Restart network | | | | | Delete network | | | | | Acquire new IP (only for isolated networks) | | | | | Replace ACL list(only for isolated networks) | | | | | Delete public IP address (only for isolated networks) | | | | | Add/delete egress rule (only for isolated networks) | | | | | Add/delete egress tags (only for isolated networks) | | | | | | | | | | **Network > VPC ** | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Add VPC | | | | | Pagination | | | | | | | | | | **Network > VPC > selected VPC** | Links | | | | | Refresh | | | | | Update VPC | | | | | Restart VPC | | | | | Delete VPC | | | | | Networks | | | | | - Links | | | | | - Paginations | | | | | - Add network | | | | | - Add internal LB | | | | | Public IP addresses | | | | | - Links | | | | | - Pagination | | | | | - Select tier | | | | | - Acquire new IP | | | | | - Delete IP address | | | | | Network ACL Lists | | | | | - Links | | | | | - Pagination | | | | | - Add network ACL list | | | | | Private Gateways | | | | | - Links | | | | | - Pagination | | | | | - Add private gateway | | | | | VPN Gateway | | | | | - Links | | | | | VPN Connections | | | | | - Links | | | | | - Pagination | | | | | - Create Site-to-site VPN connection | || | | Virtual routers | | | | | - Links | | | | | Add/delete tags | | | | | | | | | | **Network > Security groups** | | | | | Search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Add security group | | | | | Pagination | | | | | | | | | | **Network > Security groups > selected security group** | | | | | Links | | | | | Refresh | | | | | Add/delete tags | | | | | 
Add ingress rule by CIDR | | | | | Add ingress rule by Account | | | | | Ingress rule - add/delete tags | | | | | Ingress rule - delete | | | | | Add egress rule by CIDR | | | | | Add egress rule by Account | | | | | Egress rule - add/delete tags | | | | | Egress rule - delete | | | | | Ingress/egress rules pagination | | | | | | | | | | **Network > Public IP Addresses** | | | | | Search | | | | | Sort | | | | | Refresh | | | | | Links | | | | | Acquire new IP | | | | | Pagination | | | | | | | | | | **Network > Public IP Addresses > selected IP address** | | | | | Links | | | | | Refresh | | | | | Add/delete tags | | | | | Enable/Disable static NAT | | | | | Release IP | | | | | Firewall - add rule | | | | | Firewall rule - add/delete tags | | | | | Firewall rule - delete | | | | | VPN - Enable/Disable VPN | | | | | VPN - Manage VPN Users | | | | | | | | | | **Network > VPN Users** || || | Links | | | | | Search | | | | | Sort | | | | | Refresh | | | | | Pagination | | | | | Add VPN user | | | | | | | | | | **Network > VPN Users > selected VPN user** | | | | | Links | | | | | Refresh | | | | | Delete VPN User | | | | | | | | | | **Network > VPN Customer Gateway** | | | | | Links | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Pagination | | | | | Add VPN Customer Gateway | | | | | | | | | | **Network > VPN Customer Gateway > selected gateway** | | | | | Links | | | | | Refresh | | | | | Pagination | | | | | Edit VPN Customer Gateway | | | | | Delete VPN Customer Gateway | | | | | Add/delete tags | | | | | | | | | | **Images > Templates** | | | | | Links | | | | | Basic search | | | | | Extended search | | | | | Sort | | | | | Refresh | | | | | Pagination | | | | | Change order (move to the top/bottom, move one row up/down) | | | | | Register template | | | | | Upload local template | | | | | | | | | | **Images > Templates > selected template** | | | | | Links | | | | | Refresh | | | | | Add/delete tags | | | | | Edit 
Testing high-level test plan template

Mark each feature with its result, e.g.:

- **Common**
  - Project selector
    - [x] pass
    - [ ] fail #151

| Feature | Tester | Result | Related Issues |
|---------|--------|--------|----------------|
| **Common** | | | |
| - [ ] Project selector | | | |
| - [ ] Language selector | | | |
| Notifications / clear notifications | | | |
| Profile | | | |
| Help | | | |
| Logout | | | |
| | | | |
| **Dashboard** | | | |
| Fetch latest | | | |
| View hosts in alert state | | | |
| View alerts | | | |
| View events | | | |
| | | | |
| **Compute > Instances** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Filter by | | | |
| Pagination | | | |
| Create new instance | | | |
| | | | |
| **Compute > Kubernetes** | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Sub menus | | | |
| | | | |
| **Compute > Instances > selected instance** | | | |
| View console | | | |
| Reboot instance | | | |
| Update instance | | | |
| Start/Stop instance | | | |
| Reinstall instance | | | |
| Take snapshot | | | |
| Assign VM to backup offering | | | |
| Attach ISO | | | |
| Scale VM | | | |
| Migrate instance to another host | | | |
| Change affinity | | | |
| Change service offering | | | |
| Reset Instance Password | | | |
| Assign Instance to Another Account | | | |
| Network adapters | | | |
| - Add network to VM | | | |
| - Set default NIC | | | |
| - Add/delete secondary IP address | | | |
| - Delete VM network | | | |
| Settings | | | |
| - Add setting | | | |
| - Update setting | | | |
| - Delete setting | | | |
| Add / delete comment | | | |
| Add / delete tags | | | |
| Refresh | | | |
| Links | | | |
| | | | |
| **Compute > Instance groups** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| New instance group | | | |
| Pagination | | | |
| | | | |
| **Compute > Instance groups > selected instance group** | | | |
| Refresh | | | |
| Links | | | |
| Update instance group | | | |
| Delete instance group | | | |
| | | | |
| **Compute > SSH Key Pairs** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Pagination | | | |
| New SSH key pair | | | |
| | | | |
| **Compute > SSH Key Pairs > selected SSH key pair** | | | |
| Refresh | | | |
| Links | | | |
| Delete SSH key pair | | | |
| | | | |
| **Compute > Affinity Groups** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| New affinity group | | | |
| | | | |
| **Compute > Affinity Groups > selected affinity group** | | | |
| Refresh | | | |
| Links | | | |
| Delete affinity group | | | |
| | | | |
| **Storage > Volumes** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Create volume | | | |
| Upload local volume | | | |
| Upload volume from URL | | | |
| Pagination | | | |
| | | | |
| **Storage > Volumes > selected volume** | | | |
| Detach volume | | | |
| Take snapshot | | | |
| Recurring snapshot | | | |
| Resize volume | | | |
| Migrate volume | | | |
| Download volume | | | |
| Delete volume | | | |
| Refresh | | | |
| Links | | | |
| Add/delete tags | | | |
| | | | |
| **Storage > Snapshots** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Pagination | | | |
| | | | |
| **Storage > Snapshots > selected snapshot** | | | |
| Refresh | | | |
| Links | | | |
| Add/delete tags | | | |
| Create template | | | |
| Create volume | | | |
| Revert snapshot | | | |
| Delete snapshot | | | |
| | | | |
| **Storage > VM Snapshots** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Pagination | | | |
| | | | |
| **Storage > VM Snapshots > selected snapshot** | | | |
| Refresh | | | |
| Links | | | |
| Add/delete tags | | | |
| Revert VM snapshot | | | |
| Delete VM snapshot | | | |
| | | | |
| **Network > Guest networks** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Add network | | | |
| Pagination | | | |
| | | | |
| **Network > Guest networks > selected network** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Update network | | | |
| Restart network | | | |
| Delete network | | | |
| Acquire new IP (only for isolated networks) | | | |
| Replace ACL list (only for isolated networks) | | | |
| Delete public IP address (only for isolated networks) | | | |
| Add/delete egress rule (only for isolated networks) | | | |
| Add/delete egress tags (only for isolated networks) | | | |
| | | | |
| **Network > VPC** | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Add VPC | | | |
| Pagination | | | |
| | | | |
| **Network > VPC > selected VPC** | | | |
| Links | | | |
| Refresh | | | |
| Update VPC | | | |
| Restart VPC | | | |
| Delete VPC | | | |
| Networks | | | |
| - Links | | | |
| - Pagination | | | |
| - Add network | | | |
| - Add internal LB | | | |
| Public IP addresses | | | |
| - Links | | | |
| - Pagination | | | |
| - Select tier | | | |
| - Acquire new IP | | | |
| - Delete IP address | | | |
| Network ACL Lists | | | |
| - Links | | | |
| - Pagination | | | |
| - Add network ACL list | | | |
| Private Gateways | | | |
| - Links | | | |
| - Pagination | | | |
| - Add private gateway | | | |
| VPN Gateway | | | |
| - Links | | | |
| VPN Connections | | | |
| - Links | | | |
| - Pagination | | | |
| - Create Site-to-site VPN connection | | | |
| Virtual routers | | | |
| - Links | | | |
| Add/delete tags | | | |
| | | | |
| **Network > Security groups** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Add security group | | | |
| Pagination | | | |
| | | | |
| **Network > Security groups > selected security group** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Add ingress rule by CIDR | | | |
| Add ingress rule by Account | | | |
| Ingress rule - add/delete tags | | | |
| Ingress rule - delete | | | |
| Add egress rule by CIDR | | | |
| Add egress rule by Account | | | |
| Egress rule - add/delete tags | | | |
| Egress rule - delete | | | |
| Ingress/egress rules pagination | | | |
| | | | |
| **Network > Public IP Addresses** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Links | | | |
| Acquire new IP | | | |
| Pagination | | | |
| | | | |
| **Network > Public IP Addresses > selected IP address** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Enable/Disable static NAT | | | |
| Release IP | | | |
| Firewall - add rule | | | |
| Firewall rule - add/delete tags | | | |
| Firewall rule - delete | | | |
| VPN - Enable/Disable VPN | | | |
| VPN - Manage VPN Users | | | |
| | | | |
| **Network > VPN Users** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add VPN user | | | |
| | | | |
| **Network > VPN Users > selected VPN user** | | | |
| Links | | | |
| Refresh | | | |
| Delete VPN User | | | |
| | | | |
| **Network > VPN Customer Gateway** | | | |
| Links | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add VPN Customer Gateway | | | |
| | | | |
| **Network > VPN Customer Gateway > selected gateway** | | | |
| Links | | | |
| Refresh | | | |
| Pagination | | | |
| Edit VPN Customer Gateway | | | |
| Delete VPN Customer Gateway | | | |
| Add/delete tags | | | |
| | | | |
| **Images > Templates** | | | |
| Links | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order (move to the top/bottom, move one row up/down) | | | |
| Register template | | | |
| Upload local template | | | |
| | | | |
| **Images > Templates > selected template** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Edit template | | | |
| Copy template | | | |
| Update template permissions | | | |
| Delete template | | | |
| Download template | | | |
| Zones pagination | | | |
| Settings - add/edit/remove setting | | | |
| | | | |
| **Images > ISOs** | | | |
| Links | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order (move to the top/bottom, move one row up/down) | | | |
| Register ISO | | | |
| Upload local ISO | | | |
| | | | |
| **Images > ISOs > selected ISO** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Edit ISO | | | |
| Download ISO | | | |
| Update ISO permissions | | | |
| Copy ISO | | | |
| Delete ISO | | | |
| Zones - pagination | | | |
| | | | |
| **Images > Kubernetes ISOs** | | | |
| Links | | | |
| Basic search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Enable/Disable | | | |
| Add Kubernetes Version | | | |
| | | | |
| **Projects** | | | |
| Links | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Switch to project | | | |
| New project | | | |
| Enter token | | | |
| Project invitations | | | |
| | | | |
| **Projects > selected project** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete tags | | | |
| Edit project | | | |
| Suspend/Activate project | | | |
| Add account to project | | | |
| Accounts - Make account project owner | | | |
| Accounts - Remove account from project | | | |
| Delete project | | | |
| Accounts - pagination | | | |
| Resources - edit | | | |
| | | | |
| **Events** | | | |
| Links | | | |
| Basic search | | | |
| Extended search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Archive event | | | |
| Delete event | | | |
| | | | |
| **Events > selected event** | | | |
| Links | | | |
| Refresh | | | |
| Archive event | | | |
| View event timeline | | | |
| Delete event | | | |
| | | | |
| **Identify and access > Users** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add user | | | |
| | | | |
| **Identify and access > Users > selected user** | | | |
| Links | | | |
| Refresh | | | |
| Edit user | | | |
| Change password | | | |
| Generate keys | | | |
| Disable/enable user | | | |
| Delete user | | | |
| Copy API Key | | | |
| Copy Secret Key | | | |
| | | | |
| **Identify and access > Accounts** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add account | | | |
| Add LDAP account | | | |
| | | | |
| **Identify and access > Accounts > selected account** | | | |
| Links | | | |
| Refresh | | | |
| Update account | | | |
| Update resource count | | | |
| Disable/enable account | | | |
| Lock/unlock account | | | |
| Add certificate | | | |
| Delete account | | | |
| Settings | | | |
| | | | |
| **Identify and access > Domains** | | | |
| Search | | | |
| Refresh | | | |
| Expand/collapse | | | |
| Add/delete note | | | |
| Add domain | | | |
| Edit domain | | | |
| Delete domain | | | |
| Update resource count | | | |
| Link domain to LDAP Group/OU | | | |
| Settings | | | |
| | | | |
| **Identify and access > Roles** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Create role | | | |
| | | | |
| **Identify and access > Roles > selected role** | | | |
| Refresh | | | |
| Edit role | | | |
| Delete role | | | |
| Rules - add new rule | | | |
| Rules - modify rule | | | |
| Rules - delete rule | | | |
| Rules - change rules order | | | |
| | | | |
| **Regions** | | | |
| | | | |
| **Infrastructure > Summary** | | | |
| Links | | | |
| Refresh | | | |
| Setup SSL certificate | | | |
| | | | |
| **Infrastructure > Zones** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add zone | | | |
| | | | |
| **Infrastructure > Zones > selected zone** | | | |
| Links | | | |
| Refresh | | | |
| Edit zone | | | |
| Enable/disable zone | | | |
| Enable/disable out-of-band management | | | |
| Enable HA (disable?) | | | |
| Add VMWare datacenter | | | |
| Delete zone | | | |
| Settings - edit | | | |
| | | | |
| **Infrastructure > Pods** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add Pod | | | |
| | | | |
| **Infrastructure > Pods > selected Pod** | | | |
| Links | | | |
| Refresh | | | |
| Dedicate/Release Pod | | | |
| Edit Pod | | | |
| Disable/enable Pod | | | |
| Delete Pod | | | |
| | | | |
| **Infrastructure > Clusters** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add Cluster | | | |
| | | | |
| **Infrastructure > Clusters > selected cluster** | | | |
| Links | | | |
| Refresh | | | |
| Dedicate/Release cluster | | | |
| Enable/disable cluster | | | |
| Manage/unmanage cluster | | | |
| Enable/disable out-of-band management | | | |
| Enable/disable HA | | | |
| Configure HA | | | |
| Delete cluster | | | |
| Settings - edit | | | |
| | | | |
| **Infrastructure > Hosts** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add host | | | |
| | | | |
| **Infrastructure > Hosts > selected host** | | | |
| Links | | | |
| Refresh | | | |
| Add/delete notes | | | |
| Dedicate/release host | | | |
| Edit host | | | |
| Force reconnect | | | |
| Disable/enable host | | | |
| Enable/cancel maintenance mode | | | |
| Enable/disable out-of-band management | | | |
| Enable/disable HA | | | |
| Delete host (only if disabled) | | | |
| | | | |
| **Infrastructure > Primary Storage** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add Primary storage | | | |
| | | | |
| **Infrastructure > Primary Storage > selected primary storage** | | | |
| Links | | | |
| Refresh | | | |
| Edit primary storage | | | |
| Enable/cancel maintenance mode | | | |
| Delete primary storage | | | |
| Settings - edit | | | |
| | | | |
| **Infrastructure > Secondary Storage** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add Secondary storage | | | |
| | | | |
| **Infrastructure > Secondary Storage > selected secondary storage** | | | |
| Links | | | |
| Refresh | | | |
| Delete secondary storage | | | |
| Settings - edit | | | |
| | | | |
| **Infrastructure > System VMs** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > System VMs > selected system VM** | | | |
| Links | | | |
| Refresh | | | |
| View console | | | |
| Start/Stop system VM | | | |
| Reboot system VM | | | |
| Change service offering | | | |
| Migrate system VM | | | |
| Run diagnostics | | | |
| Get diagnostics data | | | |
| Destroy system VM | | | |
| | | | |
| **Infrastructure > Virtual routers** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > Virtual routers > selected virtual router** | | | |
| Links | | | |
| Refresh | | | |
| View console (running) | | | |
| Start/Stop router | | | |
| Reboot router | | | |
| Change service offering | | | |
| Migrate router (running) | | | |
| Run diagnostics (running) | | | |
| Get diagnostics data | | | |
| Destroy router | | | |
| | | | |
| **Infrastructure > Internal LB VMs** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > Internal LB VMs > selected internal LB VM** | | | |
| Links | | | |
| Refresh | | | |
| View console | | | |
| Stop router | | | |
| Migrate router | | | |
| | | | |
| **Infrastructure > CPU Sockets** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > Management servers** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > Management servers > selected management server** | | | |
| Refresh | | | |
| | | | |
| **Infrastructure > Alerts** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Infrastructure > Alerts > selected alert** | | | |
| Refresh | | | |
| Archive alert | | | |
| Delete alert | | | |
| | | | |
| **Offerings > Compute offerings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Add offering | | | |
| | | | |
| **Offerings > Compute offerings > selected offering** | | | |
| Links | | | |
| Refresh | | | |
| Edit offering | | | |
| Update offering access | | | |
| Delete offering | | | |
| | | | |
| **Offerings > System offerings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order (move to the top/bottom, move one row up/down) | | | |
| Add offering | | | |
| | | | |
| **Offerings > System offerings > selected offering** | | | |
| Refresh | | | |
| Edit offering | | | |
| Delete offering | | | |
| | | | |
| **Offerings > Disk offerings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order (move to the top/bottom, move one row up/down) | | | |
| Add offering | | | |
| | | | |
| **Offerings > Disk offerings > selected offering** | | | |
| Links | | | |
| Refresh | | | |
| Edit offering | | | |
| Update offering access | | | |
| Delete offering | | | |
| | | | |
| **Offerings > Backup offerings** | | | |
| | | | |
| **Offerings > Network offerings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order (move to the top/bottom, move one row up/down) | | | |
| Add offering | | | |
| | | | |
| **Offerings > Network offerings > selected offering** | | | |
| Refresh | | | |
| Edit offering | | | |
| Enable/Disable offering | | | |
| Update offering access | | | |
| Delete offering | | | |
| | | | |
| **Offerings > VPC offerings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Change order | | | |
| Add offering | | | |
| | | | |
| **Offerings > VPC offerings > selected offering** | | | |
| Links | | | |
| Refresh | | | |
| Add / delete tags | | | |
| Edit offering | | | |
| Enable/Disable offering | | | |
| Update offering access | | | |
| Delete offering | | | |
| | | | |
| **Configuration > Global settings** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Edit value | | | |
| | | | |
| **Configuration > LDAP Configuration** | | | |
| Links | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| Configure LDAP | | | |
| | | | |
| **Configuration > LDAP Configuration > selected LDAP configuration** | | | |
| TBD | | | |
| | | | |
| **Configuration > Baremetal Rack Configuration** | | | |
| | | | |
| **Configuration > Hypervisor capabilities** | | | |
| Data | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Quota > Summary** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Quota > Summary > selected account** | | | |
| Refresh | | | |
| Add credits | | | |
| | | | |
| **Quota > Tariff** | | | |
| Sort | | | |
| Calendar | | | |
| Refresh | | | |
| Change value | | | |
| | | | |
| **Quota > Email template** | | | |
| Search | | | |
| Sort | | | |
| Refresh | | | |
| Pagination | | | |
| | | | |
| **Context-sensitive help** | | | |
edit offering enable disable offering update offering access delete offering offerings vpc offerings links search sort refresh pagination change order add offering offerings vpc offerings selected offering links refresh add delete tags edit offering enable disable offering update offering access delete offering configuration global settings links search sort refresh pagination edit value configuration ldap configuration links search sort refresh pagination configure ldap configuration ldap configuration selected ldap configuration tbd configuration baremetal rack configuration configuration hypervisor capabilities data search sort refresh pagination quota summary search sort refresh pagination quota summary selected account refresh add credits quota tariff sort calendar refresh change value quota email template search sort refresh pagination context sensitive help
1
309,142
26,652,781,766
IssuesEvent
2023-01-25 14:49:52
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Accessibility Testing for [Check-in Experience, Check-in Experience, Simplified Header and Footer]
HCE-Checkin a11y-testing
### Product information - [X] Team name, product name, and feature name have been added to the title of this issue. - [X] Team label, product label and feature label (if applicable) have been added to this issue. ### Who completed the use of color and color contrast test? @wullaski ### Use of color and color contrast checks - [X] All text of 20px or smaller has a 4.5:1 contrast ratio to its background (or better) - [X] All text of 20px or larger has a 3:1 contrast ratio to its background (or better) - [X] Non-text elements have a 3:1 contrast ratio to their background and to neighboring elements (or better) - [X] Color is not the only way to distinguish links from other text (eg. links are underlined) - [X] Any charts, maps, infographics, and tables convey all information without only relying on color - [X] Content does not refer to color, especially when providing user instructions (eg. "Click the blue button") ### How did color testing go? - Focus styles will not meet the new 3:1 contrast ratio outlined in [WCAG 2.2](https://www.sarasoueidan.com/blog/focus-indicators/#new-focus-indicator-accessibility-requirements-in-wcag-2.2). ### Who completed the axe scans? @wullaski ### axe checks - [X] Each page has been scanned using axe (results shared below) - [X] axe is integrated into your end-to-end testing ### axe DevTools scan results False positive color contrast issues detected - overlapping elements body has a 100% height and overflow hidden on it ![Screen Shot 2023-01-25 at 8 52 08 AM](https://user-images.githubusercontent.com/2982977/214586996-05a4ed14-c799-4a4a-99aa-3d7475a13df7.png) - background images on the accordion ![Screen Shot 2023-01-25 at 9 18 09 AM](https://user-images.githubusercontent.com/2982977/214587386-0957acb9-e530-4fd1-951c-202fddce56d9.png) ### How did axe testing go? 
- [Fixed](https://github.com/department-of-veterans-affairs/va.gov-team/issues/51894) aria-describedby on [demographics display](http://fb317e98ddfef98bee03cab1cd5a3384.review.vetsgov-internal/health-care/appointment-check-in/contact-information) ### Who completed the content zoom and reflow test? @wullaski ### Content zoom and reflow checks - [X] All page elements are readable and usable at 200% zoom - [X] All page elements are readable and usable at 300% zoom - [X] All page elements are readable and usable at 400% zoom ### How did content zoom and reflow testing go? _No response_ ### Who completed the keyboard navigation test? @wullaski ### Keyboard navigation checks - [X] Each link, button, form input, checkbox, radio button, select menu, and custom element can receive keyboard focus - [X] Each link, button, form input, checkbox, radio button, select menu, and custom element responds to expected keys - [X] All elements under focus have a visible focus indicator - [X] The order of [Tab] stops made sense and was appropriate for completing tasks ### How did keyboard testing go? The focus style on the logo in the footer should contain the entire logo ![Screen Shot 2023-01-25 at 9 20 38 AM](https://user-images.githubusercontent.com/2982977/214588052-8f26a4a7-1bb1-40e9-ad6c-023418ba6493.png) ### Do you have any other results to share? **WAVE Spot Checks** The WAVE spot check pointed out a flaw with functional links such as the language picker. ![Screen Shot 2023-01-18 at 4 17 54 PM](https://user-images.githubusercontent.com/2982977/213783190-73545f9c-3e43-4cff-9a9e-652bc5c77fd5.png) Created an issue to address: https://github.com/department-of-veterans-affairs/va.gov-team/issues/52360 **Code quality review** The label doesn't match the name attribute on the input: the name attribute is `last-name` and the label is "Your last name". The role on the warning alert is `role="presentation"`, which doesn't get read by VoiceOver like the error on validation. 
![Screen Shot 2023-01-23 at 2 28 32 PM](https://user-images.githubusercontent.com/2982977/214132157-3659029e-7689-49e2-872c-7d47d3232596.png) **Mouse only and touchscreen** Touch targets on the language picker and other links that are outside of a block of content are not 44px high. **Screen Readers** VoiceOver testing pointed out a repetition issue with the loading component from the design system and the way it was implemented by the application. Created an issue for the design system team https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1457 Created an issue for the Check In Experience team https://github.com/department-of-veterans-affairs/va.gov-team/issues/52509 It also pointed out an issue with the hint text for the DOB input: it doesn't mention the day field, which also requires two digits. Created an issue with the design system team https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1455
1.0
Accessibility Testing for [Check-in Experience, Check-in Experience, Simplified Header and Footer] - ### Product information - [X] Team name, product name, and feature name have been added to the title of this issue. - [X] Team label, product label and feature label (if applicable) have been added to this issue. ### Who completed the use of color and color contrast test? @wullaski ### Use of color and color contrast checks - [X] All text of 20px or smaller has a 4.5:1 contrast ratio to its background (or better) - [X] All text of 20px or larger has a 3:1 contrast ratio to its background (or better) - [X] Non-text elements have a 3:1 contrast ratio to their background and to neighboring elements (or better) - [X] Color is not the only way to distinguish links from other text (eg. links are underlined) - [X] Any charts, maps, infographics, and tables convey all information without only relying on color - [X] Content does not refer to color, especially when providing user instructions (eg. "Click the blue button") ### How did color testing go? - Focus styles will not meet the new 3:1 contrast ratio outlined in [WCAG 2.2](https://www.sarasoueidan.com/blog/focus-indicators/#new-focus-indicator-accessibility-requirements-in-wcag-2.2). ### Who completed the axe scans? @wullaski ### axe checks - [X] Each page has been scanned using axe (results shared below) - [X] axe is integrated into your end-to-end testing ### axe DevTools scan results False positive color contrast issues detected - overlapping elements body has a 100% height and overflow hidden on it ![Screen Shot 2023-01-25 at 8 52 08 AM](https://user-images.githubusercontent.com/2982977/214586996-05a4ed14-c799-4a4a-99aa-3d7475a13df7.png) - background images on the accordion ![Screen Shot 2023-01-25 at 9 18 09 AM](https://user-images.githubusercontent.com/2982977/214587386-0957acb9-e530-4fd1-951c-202fddce56d9.png) ### How did axe testing go? 
- [Fixed](https://github.com/department-of-veterans-affairs/va.gov-team/issues/51894) aria-describedby on [demographics display](http://fb317e98ddfef98bee03cab1cd5a3384.review.vetsgov-internal/health-care/appointment-check-in/contact-information) ### Who completed the content zoom and reflow test? @wullaski ### Content zoom and reflow checks - [X] All page elements are readable and usable at 200% zoom - [X] All page elements are readable and usable at 300% zoom - [X] All page elements are readable and usable at 400% zoom ### How did content zoom and reflow testing go? _No response_ ### Who completed the keyboard navigation test? @wullaski ### Keyboard navigation checks - [X] Each link, button, form input, checkbox, radio button, select menu, and custom element can receive keyboard focus - [X] Each link, button, form input, checkbox, radio button, select menu, and custom element responds to expected keys - [X] All elements under focus have a visible focus indicator - [X] The order of [Tab] stops made sense and was appropriate for completing tasks ### How did keyboard testing go? The focus style on the logo in the footer should contain the entire logo ![Screen Shot 2023-01-25 at 9 20 38 AM](https://user-images.githubusercontent.com/2982977/214588052-8f26a4a7-1bb1-40e9-ad6c-023418ba6493.png) ### Do you have any other results to share? **WAVE Spot Checks** The WAVE spot check pointed out a flaw with functional links such as the language picker. ![Screen Shot 2023-01-18 at 4 17 54 PM](https://user-images.githubusercontent.com/2982977/213783190-73545f9c-3e43-4cff-9a9e-652bc5c77fd5.png) Created an issue to address: https://github.com/department-of-veterans-affairs/va.gov-team/issues/52360 **Code quality review** The label doesn't match the name attribute on the input: the name attribute is `last-name` and the label is "Your last name". The role on the warning alert is `role="presentation"`, which doesn't get read by VoiceOver like the error on validation. 
![Screen Shot 2023-01-23 at 2 28 32 PM](https://user-images.githubusercontent.com/2982977/214132157-3659029e-7689-49e2-872c-7d47d3232596.png) **Mouse only and touchscreen** Touch targets on the language picker and other links that are outside of a block of content are not 44px high. **Screen Readers** VoiceOver testing pointed out a repetition issue with the loading component from the design system and the way it was implemented by the application. Created an issue for the design system team https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1457 Created an issue for the Check In Experience team https://github.com/department-of-veterans-affairs/va.gov-team/issues/52509 It also pointed out an issue with the hint text for the DOB input: it doesn't mention the day field, which also requires two digits. Created an issue with the design system team https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/1455
test
accessibility testing for product information team name product name and feature name have been added to the title of this issue team label product label and feature label if applicable have been added to this issue who completed the use of color and color contrast test wullaski use of color and color contrast checks all text of or smaller has a contrast ratio to its background or better all text of or larger has a contrast ratio to its background or better non text elements have a contrast ratio to their background and to neighboring elements or better color is not the only way to distinguish links from other text eg links are underlined any charts maps infographics and tables convey all information without only relying on color content does not refer to color especially when providing user instructions eg click the blue button how did color testing go focus styles will not meet the new contrast ratio outlined in who completed the axe scans wullaski axe checks each page has been scanned using axe results shared below axe is integrated into your end to end testing axe devtools scan results false positive color contrast issues detected overlapping elements body has a height and overflow hidden on it background images on the accordion how did axe testing go aria described by on who completed the content zoom and reflow test wullaski content zoom and reflow checks all page elements are readable and usable at zoom all page elements are readable and usable at zoom all page elements are readable and usable at zoom how did content zoom and reflow testing go no response who completed the keyboard navigation test wullaski keyboard navigation checks each link button form input checkbox radio button select menu and custom element can receive keyboard focus each link button form input checkbox radio button select menu and custom element responds to expected keys all elements under focus have a visible focus indicator the order of stops made sense and was appropriate for 
completing tasks how did keyboard testing go focus style on logo in the footer should contain the entire logo do you have any other results to share wave spot checks wave spot check pointed out flaw with functional links such as the language picker created issue to address code quality review name doesn t match the name attribute on the input the name attribute is last name and the label is your last name role on the warning alert is role presentation which doesn t get read by voice over like the error on validation mouse only and touchscreen touch target on the language picker and other links that are outside of a block of content are not high screen readers voice over testing pointed out a repetition issue with the loading component from the design system and the way it was implemented by the application created an issue for the design system team created an issue for the check in experience team it also pointed out an issue with the hint text for the dob input it doesn t mention the day field which also requires two digits created an issue with the design system team
1
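The 4.5:1 and 3:1 thresholds checked in the record above come from the WCAG 2.x contrast-ratio formula: the relative luminance of the lighter color plus 0.05, divided by that of the darker color plus 0.05. A minimal sketch of that computation — the class and method names here are illustrative, not taken from axe, WAVE, or any tooling mentioned in the report:

```java
public class ContrastDemo {
    // WCAG 2.x linearization of one sRGB channel value in 0..255
    static double channel(int v) {
        double c = v / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // relative luminance of an sRGB color
    static double luminance(int r, int g, int b) {
        return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
    }

    // contrast ratio between two luminances; always >= 1, order-independent
    static double contrast(double l1, double l2) {
        double hi = Math.max(l1, l2), lo = Math.min(l1, l2);
        return (hi + 0.05) / (lo + 0.05);
    }

    public static void main(String[] args) {
        double blackOnWhite = contrast(luminance(0, 0, 0), luminance(255, 255, 255));
        // 21:1 is the maximum possible ratio; the checklist requires >= 4.5:1
        // for small text and >= 3:1 for large text and non-text elements
        System.out.printf("black on white: %.1f:1%n", blackOnWhite);
    }
}
```

Black text on a white background yields the maximum ratio of 21:1, which is why it trivially passes both checklist thresholds.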
223,501
17,603,527,042
IssuesEvent
2021-08-17 14:29:18
apache/pulsar
https://api.github.com/repos/apache/pulsar
opened
Flaky-test: LockManagerTest.revalidateLockOnDifferentSession
flaky-tests
<!--- Instructions for reporting a flaky test using this issue template: 1. Replace [test class] in title and body with the test class name 2. Replace [test method] in title and body with the test method that failed. If multiple methods are flaky, remove the content that refers to the test method. 3. Replace "url here" with a url to an example failure. In the Github Actions workflow run logs, you can right click on the line number to copy a link to the line. Example of such url is https://github.com/apache/pulsar/pull/8892/checks?check_run_id=1531075794#step:9:377 . The logs are available for a limited amount of time (usually for a few weeks). 4. Replace "relevant parts of the exception stacktrace here" with a few lines of the stack trace that show at least the exception message and the line of test code where the stacktrace occurred. 5. Replace "full exception stacktrace here" with the full exception stacktrace from logs. This section will be hidden by default. 6. Remove all unused fields / content to unclutter the reported issue. Remove this comment too. --> LockManagerTest is flaky. The revalidateLockOnDifferentSession test method fails sporadically. [example failure](https://github.com/apache/pulsar/pull/11681/checks?check_run_id=3348683996#step:9:4714) ``` Error: Tests run: 12, Failures: 1, Errors: 0, Skipped: 7, Time elapsed: 1.52 s <<< FAILURE! - in org.apache.pulsar.metadata.LockManagerTest Error: revalidateLockOnDifferentSession(org.apache.pulsar.metadata.LockManagerTest) Time elapsed: 0.264 s <<< FAILURE! 
java.util.NoSuchElementException: No value present at java.base/java.util.Optional.get(Optional.java:148) at org.apache.pulsar.metadata.LockManagerTest.revalidateLockOnDifferentSession(LockManagerTest.java:229) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132) at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) ```
1.0
Flaky-test: LockManagerTest.revalidateLockOnDifferentSession - <!--- Instructions for reporting a flaky test using this issue template: 1. Replace [test class] in title and body with the test class name 2. Replace [test method] in title and body with the test method that failed. If multiple methods are flaky, remove the content that refers to the test method. 3. Replace "url here" with a url to an example failure. In the Github Actions workflow run logs, you can right click on the line number to copy a link to the line. Example of such url is https://github.com/apache/pulsar/pull/8892/checks?check_run_id=1531075794#step:9:377 . The logs are available for a limited amount of time (usually for a few weeks). 4. Replace "relevant parts of the exception stacktrace here" with a few lines of the stack trace that show at least the exception message and the line of test code where the stacktrace occurred. 5. Replace "full exception stacktrace here" with the full exception stacktrace from logs. This section will be hidden by default. 6. Remove all unused fields / content to unclutter the reported issue. Remove this comment too. --> LockManagerTest is flaky. The revalidateLockOnDifferentSession test method fails sporadically. [example failure](https://github.com/apache/pulsar/pull/11681/checks?check_run_id=3348683996#step:9:4714) ``` Error: Tests run: 12, Failures: 1, Errors: 0, Skipped: 7, Time elapsed: 1.52 s <<< FAILURE! - in org.apache.pulsar.metadata.LockManagerTest Error: revalidateLockOnDifferentSession(org.apache.pulsar.metadata.LockManagerTest) Time elapsed: 0.264 s <<< FAILURE! 
java.util.NoSuchElementException: No value present at java.base/java.util.Optional.get(Optional.java:148) at org.apache.pulsar.metadata.LockManagerTest.revalidateLockOnDifferentSession(LockManagerTest.java:229) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132) at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) ```
test
flaky test lockmanagertest revalidatelockondifferentsession instructions for reporting a flaky test using this issue template replace in title and body with the test class name replace in title and body with the test method that failed multiple methods are flaky remove the content that refers to the test method replace url here with a url to an example failure in the github actions workflow run logs you can right click on the line number to copy a link to the line example of such url is the logs are available for a limited amount of time usually for a few weeks replace relevant parts of the exception stacktrace here with the a few lines of the stack trace that shows at leat the exception message and the line of test code where the stacktrace occured replace full exception stacktrace here with the full exception stacktrace from logs this section will be hidded by default remove all unused fields content to unclutter the reported issue remove this comment too lockmanagertest is flaky the revalidatelockondifferentsession test method fails sporadically error tests run failures errors skipped time elapsed s failure in org apache pulsar metadata lockmanagertest error revalidatelockondifferentsession org apache pulsar metadata lockmanagertest time elapsed s failure java util nosuchelementexception no value present at java base java util optional get optional java at org apache pulsar metadata lockmanagertest revalidatelockondifferentsession lockmanagertest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invokemethodrunnable runone invokemethodrunnable java at org testng internal 
invokemethodrunnable call invokemethodrunnable java at org testng internal invokemethodrunnable call invokemethodrunnable java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java
1
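The `java.util.NoSuchElementException: No value present` at the top of the stack trace in the record above is exactly what `Optional.get()` throws on an empty `Optional`; in a flaky test this usually means an asynchronous result was read before it was produced. A minimal sketch of the failure mode and a fallback that fails more legibly — the variable names are illustrative, not from the Pulsar test:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalGetDemo {
    public static void main(String[] args) {
        // e.g. the lock has not been re-acquired yet when the assertion runs
        Optional<String> lockOwner = Optional.empty();
        try {
            lockOwner.get(); // throws "No value present", as in the report
        } catch (NoSuchElementException e) {
            System.out.println("empty Optional: " + e.getMessage());
        }
        // in a test, awaiting the value with a timeout or supplying an explicit
        // fallback produces a clearer failure than a bare get()
        System.out.println(lockOwner.orElse("lock-not-yet-acquired"));
    }
}
```

A bare `get()` on line 229 of the test gives no context; `orElseThrow` with a message, or an `awaitility`-style wait, would make the sporadic failure diagnosable.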
314,383
26,996,240,603
IssuesEvent
2023-02-10 01:26:15
MohistMC/Mohist
https://api.github.com/repos/MohistMC/Mohist
closed
[1.16.5] No enum constant for Vanilla Biomes!
1.16.5 Wait Needs Testing
<!-- ISSUE_TEMPLATE_3 -> IMPORTANT: DO NOT DELETE THIS LINE.--> <!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.--> <!-- If you don't know where to upload your logs and crash reports, you can use these websites : --> <!-- https://gist.github.com (recommended) --> <!-- https://mclo.gs --> <!-- https://haste.mohistmc.com --> <!-- https://pastebin.com --> <!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT --> **Minecraft Version :** **1.16.5** **Mohist Version :** **mohist-1.16.5-1099** **Operating System :** any **Description of issue :** The code ```java player.getLocation().getBlock().getBiome() ``` Will cause: ```java Caused by: java.lang.IllegalArgumentException: No enum constant org.bukkit.block.Biome.MINECRAFT_TAIGA at java.lang.Enum.valueOf(Unknown Source) ~[?:1.8.0_333] at org.bukkit.block.Biome.valueOf(Biome.java:11) ~[forge:?] at org.bukkit.craftbukkit.v1_16_R3.block.CraftBlock.biomeBaseToBiome(CraftBlock.java:522) ~[forge:?] at org.bukkit.craftbukkit.v1_16_R3.CraftWorld.getBiome(CraftWorld.java:935) ~[forge:7e29f765-296df566-9fb885e8] at org.bukkit.craftbukkit.v1_16_R3.block.CraftBlock.getBiome(CraftBlock.java:508) ~[forge:?] ``` Because there is no enum for `Biome.MINECRAFT_TAIGA`, only for `Biome.TAIGA` So, if the biome is vanilla there is no need for the biome prefix.
1.0
[1.16.5] No enum constant for Vanilla Biomes! - <!-- ISSUE_TEMPLATE_3 -> IMPORTANT: DO NOT DELETE THIS LINE.--> <!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.--> <!-- If you don't know where to upload your logs and crash reports, you can use these websites : --> <!-- https://gist.github.com (recommended) --> <!-- https://mclo.gs --> <!-- https://haste.mohistmc.com --> <!-- https://pastebin.com --> <!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT --> **Minecraft Version :** **1.16.5** **Mohist Version :** **mohist-1.16.5-1099** **Operating System :** any **Description of issue :** The code ```java player.getLocation().getBlock().getBiome() ``` Will cause: ```java Caused by: java.lang.IllegalArgumentException: No enum constant org.bukkit.block.Biome.MINECRAFT_TAIGA at java.lang.Enum.valueOf(Unknown Source) ~[?:1.8.0_333] at org.bukkit.block.Biome.valueOf(Biome.java:11) ~[forge:?] at org.bukkit.craftbukkit.v1_16_R3.block.CraftBlock.biomeBaseToBiome(CraftBlock.java:522) ~[forge:?] at org.bukkit.craftbukkit.v1_16_R3.CraftWorld.getBiome(CraftWorld.java:935) ~[forge:7e29f765-296df566-9fb885e8] at org.bukkit.craftbukkit.v1_16_R3.block.CraftBlock.getBiome(CraftBlock.java:508) ~[forge:?] ``` Because there is no enum for `Biome.MINECRAFT_TAIGA`, only for `Biome.TAIGA` So, if the biome is vanilla there is no need for the biome prefix.
test
no enum constant for vanilla biomes important do not delete this line minecraft version mohist version mohist operating system any description of issue the code java player getlocation getblock getbiome will cause java caused by java lang illegalargumentexception no enum constant org bukkit block biome minecraft taiga at java lang enum valueof unknown source at org bukkit block biome valueof biome java at org bukkit craftbukkit block craftblock biomebasetobiome craftblock java at org bukkit craftbukkit craftworld getbiome craftworld java at org bukkit craftbukkit block craftblock getbiome craftblock java because there is no enum for biome minecraft taiga only for biome taiga so if the biome is vanilla there is no need for the biome prefix
1
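The `IllegalArgumentException` in the record above is a plain `Enum.valueOf` lookup being handed a namespaced key. A sketch of the mismatch, plus the kind of prefix-stripping fallback the report implies — the enum and helper below are simplified stand-ins, not the actual Bukkit `Biome` type:

```java
public class BiomeLookupDemo {
    // simplified stand-in for org.bukkit.block.Biome; vanilla constants carry no namespace prefix
    enum Biome { TAIGA, PLAINS }

    // hypothetical helper: drop a "MINECRAFT_" prefix before the enum lookup,
    // which is roughly the fix the issue asks for
    static Biome fromKey(String key) {
        String name = key.toUpperCase();
        if (name.startsWith("MINECRAFT_")) {
            name = name.substring("MINECRAFT_".length());
        }
        return Biome.valueOf(name);
    }

    public static void main(String[] args) {
        try {
            Biome.valueOf("MINECRAFT_TAIGA"); // the failing call from the report
        } catch (IllegalArgumentException e) {
            System.out.println("no enum constant for the prefixed name");
        }
        System.out.println(fromKey("minecraft_taiga")); // resolves to TAIGA
    }
}
```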
708,670
24,349,519,395
IssuesEvent
2022-10-02 19:17:43
googleapis/python-aiplatform
https://api.github.com/repos/googleapis/python-aiplatform
closed
inconsistent: hp tuning still using entrypoints vs vertex pipeline using py-functions
type: bug priority: p2 type: docs :rotating_light: api: vertex-ai
With Vertex AI, you have the option to create components from py-functions in a custom base_image. https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/#packages But then for HPT, using the base_image makes you call it via command, as an entrypoint. https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#aiplatform_create_hyperparameter_tuning_job_python_package_sample-python This for me is inconsistent. One pushes for end-to-end Python code, while the other goes back to bash. And HPT is a very common step in a pipeline. Maybe the solution would be to have a native gcp_component to do the HPT from a py-func? https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.1.6/genindex.html#T I report this as a bug and not a feature due to the inconsistency
1.0
inconsistent: hp tuning still using entrypoints vs vertex pipeline using py-functions - With Vertex AI, you have the option to create components from py-functions in a custom base_image. https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/#packages But then for HPT, using the base_image makes you call it via command, as an entrypoint. https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#aiplatform_create_hyperparameter_tuning_job_python_package_sample-python This for me is inconsistent. One pushes for end-to-end Python code, while the other goes back to bash. And HPT is a very common step in a pipeline. Maybe the solution would be to have a native gcp_component to do the HPT from a py-func? https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.1.6/genindex.html#T I report this as a bug and not a feature due to the inconsistency
non_test
inconsistent hp tunning still using entrypoints vs vertex pipeline using py functions with vertex ai you have the option to create components from py functions in a custom base image but then for hpt when using the base image makes you call it via command as entrypoint this for me is inconsistent one pushes for an end to end python code while the other goes back to bash and hpt is a very usual step in a pipeline maybe the solution would be to have a native gcp component to do the hpt from a py func i report this as a bug and not a feature due the incosistency
0
253,618
8,058,449,583
IssuesEvent
2018-08-02 18:30:34
threefoldfoundation/tfchain
https://api.github.com/repos/threefoldfoundation/tfchain
closed
support atomic swaps with ethereum
priority_critical
As tfchain supports (thanks to Rivine) the atomic swap protocol as implemented by decred (see: https://github.com/decred/atomicswap), we already support bitcoin, litecoin, monacoin, particl, polis, vertcoin, viacoin and zcoin. Now it would be great if we also support Etheruem. This would require the implementation of a command line tool, as well as an ethereum smart contract (in solidity?). Whether we add that tool and contract in this repo, Rivine, a separate repo or contribute it back to https://github.com/decred/atomicswap is currently undecided.
1.0
support atomic swaps with ethereum - As tfchain supports (thanks to Rivine) the atomic swap protocol as implemented by decred (see: https://github.com/decred/atomicswap), we already support bitcoin, litecoin, monacoin, particl, polis, vertcoin, viacoin and zcoin. Now it would be great if we also support Etheruem. This would require the implementation of a command line tool, as well as an ethereum smart contract (in solidity?). Whether we add that tool and contract in this repo, Rivine, a separate repo or contribute it back to https://github.com/decred/atomicswap is currently undecided.
non_test
support atomic swaps with ethereum as tfchain supports thanks to rivine the atomic swap protocol as implemented by decred see we already support bitcoin litecoin monacoin particl polis vertcoin viacoin and zcoin now it would be great if we also support etheruem this would require the implementation of a command line tool as well as an ethereum smart contract in solidity whether we add that tool and contract in this repo rivine a separate repo or contribute it back to is currently undecided
0
75,160
7,461,003,994
IssuesEvent
2018-03-30 22:37:05
kubernetes/test-infra
https://api.github.com/repos/kubernetes/test-infra
closed
PR Dashboard: Explain what icons mean
area/gubernator help wanted kind/bug sig/testing
My dashboard currently shows some red Xs and a yellow dot. I think they encode something about PR state, but hovering over them shows nothing, and there's no legend. Hover would be fine.
1.0
PR Dashboard: Explain what icons mean - My dashboard currently shows some red Xs and a yellow dot. I think they encode something about PR state, but hovering over them shows nothing, and there's no legend. Hover would be fine.
test
pr dashboard explain what icons mean my dashboard currently shows some red xs and a yellow dot i think they encode something about pr state but hovering over them shows nothing and there s no legend hover would be fine
1
199,197
15,027,721,056
IssuesEvent
2021-02-02 01:25:36
CGCookie/retopoflow
https://api.github.com/repos/CGCookie/retopoflow
closed
Symetry: Improve Documentation
Ready for Testing enhancement
I had a problem setting symetry for my model. It seems the Symetry axis is aplied to the 3d cursor rather than the object origin. In my opinion this is not obvious. It took me a very long time to figure this out. I suggest you shoud add this to your documentation.
1.0
Symetry: Improve Documentation - I had a problem setting symetry for my model. It seems the Symetry axis is aplied to the 3d cursor rather than the object origin. In my opinion this is not obvious. It took me a very long time to figure this out. I suggest you shoud add this to your documentation.
test
symetry improve documentation i had a problem setting symetry for my model it seems the symetry axis is aplied to the cursor rather than the object origin in my opinion this is not obvious it took me a very long time to figure this out i suggest you shoud add this to your documentation
1
308,595
26,616,976,671
IssuesEvent
2023-01-24 08:10:18
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix linalg.test_torch_vector_norm
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_vector_norm[cpu-ivy.functional.backends.jax-False-False]</summary> 2023-01-21T17:04:33.7069524Z E AssertionError: [256.] != inf 2023-01-21T17:04:33.7069868Z E Falsifying example: test_torch_vector_norm( 2023-01-21T17:04:33.7070369Z E dtype_values_axis=(['float16'], [array(-256., dtype=float16)], None), 2023-01-21T17:04:33.7070712Z E kd=True, 2023-01-21T17:04:33.7071162Z E ord=2, 2023-01-21T17:04:33.7071489Z E dtype=['int32', 2023-01-21T17:04:33.7071796Z E 'bool', 2023-01-21T17:04:33.7072096Z E 'int64', 2023-01-21T17:04:33.7072394Z E 'uint8', 2023-01-21T17:04:33.7072688Z E 'bfloat16', 2023-01-21T17:04:33.7072996Z E 'float16', 2023-01-21T17:04:33.7073296Z E 'int16', 2023-01-21T17:04:33.7073601Z E 'complex128', 2023-01-21T17:04:33.7074084Z E 'float64', 2023-01-21T17:04:33.7074396Z E 'float32', 2023-01-21T17:04:33.7074685Z E 'int8', 2023-01-21T17:04:33.7075073Z E 'complex64'], 2023-01-21T17:04:33.7075974Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. &#13;, 2023-01-21T17:04:33.7077028Z E fn_tree='ivy.functional.frontends.torch.linalg.vector_norm', 2023-01-21T17:04:33.7077372Z E on_device='cpu', 2023-01-21T17:04:33.7077622Z E frontend='torch', 2023-01-21T17:04:33.7077807Z E ) 2023-01-21T17:04:33.7077979Z E 2023-01-21T17:04:33.7078495Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2QAAkYGBgjFCGUCAABVAAY=') as a decorator on your test case </details>
1.0
Fix linalg.test_torch_vector_norm - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/3975662191/jobs/6815665175" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py::test_torch_vector_norm[cpu-ivy.functional.backends.jax-False-False]</summary> 2023-01-21T17:04:33.7069524Z E AssertionError: [256.] != inf 2023-01-21T17:04:33.7069868Z E Falsifying example: test_torch_vector_norm( 2023-01-21T17:04:33.7070369Z E dtype_values_axis=(['float16'], [array(-256., dtype=float16)], None), 2023-01-21T17:04:33.7070712Z E kd=True, 2023-01-21T17:04:33.7071162Z E ord=2, 2023-01-21T17:04:33.7071489Z E dtype=['int32', 2023-01-21T17:04:33.7071796Z E 'bool', 2023-01-21T17:04:33.7072096Z E 'int64', 2023-01-21T17:04:33.7072394Z E 'uint8', 2023-01-21T17:04:33.7072688Z E 'bfloat16', 2023-01-21T17:04:33.7072996Z E 'float16', 2023-01-21T17:04:33.7073296Z E 'int16', 2023-01-21T17:04:33.7073601Z E 'complex128', 2023-01-21T17:04:33.7074084Z E 'float64', 2023-01-21T17:04:33.7074396Z E 'float32', 2023-01-21T17:04:33.7074685Z E 'int8', 2023-01-21T17:04:33.7075073Z E 'complex64'], 2023-01-21T17:04:33.7075974Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. &#13;, 2023-01-21T17:04:33.7077028Z E fn_tree='ivy.functional.frontends.torch.linalg.vector_norm', 2023-01-21T17:04:33.7077372Z E on_device='cpu', 2023-01-21T17:04:33.7077622Z E frontend='torch', 2023-01-21T17:04:33.7077807Z E ) 2023-01-21T17:04:33.7077979Z E 2023-01-21T17:04:33.7078495Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2QAAkYGBgjFCGUCAABVAAY=') as a decorator on your test case </details>
test
fix linalg test torch vector norm tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test torch test linalg py test torch vector norm e assertionerror inf e falsifying example test torch vector norm e dtype values axis none e kd true e ord e dtype e bool e e e e e e e e e e e test flags num positional args with out false inplace false native arrays as variable e fn tree ivy functional frontends torch linalg vector norm e on device cpu e frontend torch e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
1
23,947
4,054,063,966
IssuesEvent
2016-05-24 10:53:23
ProfessorKaos64/SteamOS-Tools
https://api.github.com/repos/ProfessorKaos64/SteamOS-Tools
closed
RetroArch 1.3.4 segfault
brewmaster_testing investigating
#### Please describe your issue in as much detail as possible: Builds of RetroArch 1.3.4 segfault with existing packaging. This is under review. Logs will follow. The package is currently sitting in brewmaster_testing if there are folks willing to debug. See packaging scripts below if you wish to build yourself using existing methods: https://github.com/ProfessorKaos64/LibreGeek-Packaging/tree/brewmaster/retroarch
1.0
RetroArch 1.3.4 segfault - #### Please describe your issue in as much detail as possible: Builds of RetroArch 1.3.4 segfault with existing packaging. This is under review. Logs will follow. The package is currently sitting in brewmaster_testing if there are folks willing to debug. See packaging scripts below if you wish to build yourself using existing methods: https://github.com/ProfessorKaos64/LibreGeek-Packaging/tree/brewmaster/retroarch
test
retroarch segfault please describe your issue in as much detail as possible builds of retroarch segfault with existing packaging this is under review logs will follow the package is currently sitting in brewmaster testing if there are folks willing to debug see packaging scripts below if you wish to build yourself using existing methods
1
137,722
11,156,924,886
IssuesEvent
2019-12-25 09:45:31
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
opened
'Clone with New Name...' action isn't localized
:gear: blobs :gear: files 🌐 localization 🧪 testing
**Storage Explorer Version:** 1.11.2 **Build**: [20191220.9](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3338616) **Branch**: hotfix/1.11.2-to-master **Language**: Chinese(zh-CN)/ Chinese(zh-TW) / Japanese / Korean **Platform/OS**: Windows 10/ CentOS 7.6.1810/ MacOS High Sierra **Architecture**: ia32/x64 **Regression From**: Not a regression **Steps to reproduce:** 1. Launch Storage Explorer. 2. Open 'Settings' -> Application (Regional Settings) -> Select '한국어' -> Restart Storage Explorer. 3. Expand one non-ADLS Gen2 storage account -> Blob Containers -> Create a new blob Container -> Right click it. 4. Observe its context menu. **Expect Experience:** All actions are localized. **Actual Experience:** 'Clone with New Name...' action isn't localized. ![image](https://user-images.githubusercontent.com/54055206/71441119-1b222e00-273b-11ea-8ad2-49d8ea32b834.png) **More Info:** 1. This issue also reproduces for one file share. 2. This issue also reproduces for one blob/file. 3. The 'Clone' dialog isn't localized. ![image](https://user-images.githubusercontent.com/54055206/71441140-2f662b00-273b-11ea-8775-d26fd0fb71f0.png)
1.0
'Clone with New Name...' action isn't localized - **Storage Explorer Version:** 1.11.2 **Build**: [20191220.9](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3338616) **Branch**: hotfix/1.11.2-to-master **Language**: Chinese(zh-CN)/ Chinese(zh-TW) / Japanese / Korean **Platform/OS**: Windows 10/ CentOS 7.6.1810/ MacOS High Sierra **Architecture**: ia32/x64 **Regression From**: Not a regression **Steps to reproduce:** 1. Launch Storage Explorer. 2. Open 'Settings' -> Application (Regional Settings) -> Select '한국어' -> Restart Storage Explorer. 3. Expand one non-ADLS Gen2 storage account -> Blob Containers -> Create a new blob Container -> Right click it. 4. Observe its context menu. **Expect Experience:** All actions are localized. **Actual Experience:** 'Clone with New Name...' action isn't localized. ![image](https://user-images.githubusercontent.com/54055206/71441119-1b222e00-273b-11ea-8ad2-49d8ea32b834.png) **More Info:** 1. This issue also reproduces for one file share. 2. This issue also reproduces for one blob/file. 3. The 'Clone' dialog isn't localized. ![image](https://user-images.githubusercontent.com/54055206/71441140-2f662b00-273b-11ea-8775-d26fd0fb71f0.png)
test
clone with new name action isn t localized storage explorer version build branch hotfix to master language chinese zh cn chinese zh tw japanese korean platform os windows centos macos high sierra architecture regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select 한국어 restart storage explorer expand one non adls storage account blob containers create a new blob container right click it observe its context menu expect experience all actions are localized actual experience clone with new name action isn t localized more info this issue also reproduces for one file share this issue also reproduces for one blob file the clone dialog isn t localized
1
69,938
13,384,235,817
IssuesEvent
2020-09-02 11:40:53
JabRef/jabref
https://api.github.com/repos/JabRef/jabref
closed
Failing architecture tests
type: code-quality
Just as to remind that this is known, but not yet fixed. We cannot fix it today, but hopefully a PR will come in soon. ```text Test 10 -- is org.jabref.model independent of org.jabref.logic? FAILED org.opentest4j.AssertionFailedError: The following classes are not allowed to depend on org.jabref.logic ==> expected: <[]> but was: <[src/main/java/org/jabref/model/cleanup/NormalizeNewlinesFormatter.java]> at org.jabref@100.0.0/org.jabref.architecture.MainArchitectureTests.firstPackageIsIndependentOfSecondPackage(MainArchitectureTests.java:110) ```
1.0
Failing architecture tests - Just as to remind that this is known, but not yet fixed. We cannot fix it today, but hopefully a PR will come in soon. ```text Test 10 -- is org.jabref.model independent of org.jabref.logic? FAILED org.opentest4j.AssertionFailedError: The following classes are not allowed to depend on org.jabref.logic ==> expected: <[]> but was: <[src/main/java/org/jabref/model/cleanup/NormalizeNewlinesFormatter.java]> at org.jabref@100.0.0/org.jabref.architecture.MainArchitectureTests.firstPackageIsIndependentOfSecondPackage(MainArchitectureTests.java:110) ```
non_test
failing architecture tests just as to remind that this is known but not yet fixed we cannot fix it today but hopefully a pr will come in soon text test is org jabref model independent of org jabref logic failed org assertionfailederror the following classes are not allowed to depend on org jabref logic expected but was at org jabref org jabref architecture mainarchitecturetests firstpackageisindependentofsecondpackage mainarchitecturetests java
0
94,584
8,506,530,965
IssuesEvent
2018-10-30 16:47:23
SME-Issues/issues
https://api.github.com/repos/SME-Issues/issues
closed
Query Payment Tests Canonical - 26/10/2018
NLP Api pulse_tests
### Summary: - Total - 186 (100%) - Passed - 132 (71%) - Failed - 54 (29%) - Unsupported - 0 (0%) - Not understood - 10 (5%) **Querypayment Tests Canonical** - Total - 186 (100%) - Fail - 54 (29%) - Failed - 49 - Test failed exception - 5 - Pass - 132 (71%) - Passed as expected - 132
1.0
Query Payment Tests Canonical - 26/10/2018 - ### Summary: - Total - 186 (100%) - Passed - 132 (71%) - Failed - 54 (29%) - Unsupported - 0 (0%) - Not understood - 10 (5%) **Querypayment Tests Canonical** - Total - 186 (100%) - Fail - 54 (29%) - Failed - 49 - Test failed exception - 5 - Pass - 132 (71%) - Passed as expected - 132
test
query payment tests canonical summary total passed failed unsupported not understood querypayment tests canonical total fail failed test failed exception pass passed as expected
1
160,325
12,507,218,159
IssuesEvent
2020-06-02 13:48:45
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Test: pinned tabs
testplan-item
Refs: https://github.com/microsoft/vscode/issues/12622 - [x] anyOS @jrieken - [x] anyOS @joaomoreno Complexity: 4 Authors: @bpasero [Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%2398019%0A%0A) --- Pinned tabs work when tabs are enabled through the following interactions: * from the context menu of a tab ("Pin" / "Unpin") * via keybinding * via global action from the command palette targeting the active editor **Verify** * tabs can be pinned and unpinned in each editor group * pinned tabs always appear before unpinned tabs * opening a new tab always opens it after the last pinned tab * pinned tabs remain visible even when many tabs are opened and scrollbars appear * non-pinned tabs scroll "under" pinned tabs * pinned tabs eventually scroll as normal tabs once you make the size of an editor group small enough (120px is the limit) * the active tab is always fully revealed in the presence of pinned tabs * pinned tabs show with a small fixed size * showing only the icon if icons are enabled * showing the first letter of the file name otherwise * pinned tabs can only be closed explicitly (e.g. &#13;via `CtrlCmd+W`) and are exempted from mass close actions (like "Close Others") * enabling `workbench.editor.limit.enabled` will never close a pinned tab when the limit is reached * pinned tabs work together with any of the tab related settings * `workbench.editor.tabSizing` set to `shrink` and `fit` * `workbench.editor.tabCloseButton` set to `left`, `right` or `off` (note that pinned tabs never show any close button) * `workbench.editor.highlightModifiedTabs` enabled to show a dirty indication for dirty pinned tabs * `workbench.editor.openPositioning` set to `left`, `right`, `first` or `last` does never mix pinned editors with non-pinned * pinned tabs preserve their state * saving an untitled editor that is pinned shows the result as pinned too * switching an editor for displaying a resource (markdown preview vs markdown text) should preserve pinned state * the command to reopen a closed tab restores a pinned tab at the right index and preserves pinned state * verify drag and drop * moving pinned tabs preserves pinned state if dropped onto other pinned tabs * moving pinned tabs onto a non-pinned tab unpins it * moving unpinned tabs onto pinned tab pins it * this works in the same group and across editor groups or windows * pin an editor that we do not restore after restart (e.g. a SCM diff editor) and verify that after restart the number of pinned vs unpinned editors is not broken * disable tabs when you have pinned tabs and ensure that they drop their pinned status (e.g. now mass close commands work on them)
1.0
Test: pinned tabs - Refs: https://github.com/microsoft/vscode/issues/12622 - [x] anyOS @jrieken - [x] anyOS @joaomoreno Complexity: 4 Authors: @bpasero [Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%2398019%0A%0A) --- Pinned tabs work when tabs are enabled through the following interactions: * from the context menu of a tab ("Pin" / "Unpin") * via keybinding * via global action from the command palette targeting the active editor **Verify** * tabs can be pinned and unpinned in each editor group * pinned tabs always appear before unpinned tabs * opening a new tab always opens it after the last pinned tab * pinned tabs remain visible even when many tabs are opened and scrollbars appear * non-pinned tabs scroll "under" pinned tabs * pinned tabs eventually scroll as normal tabs once you make the size of an editor group small enough (120px is the limit) * the active tab is always fully revealed in the presence of pinned tabs * pinned tabs show with a small fixed size * showing only the icon if icons are enabled * showing the first letter of the file name otherwise * pinned tabs can only be closed explicitly (e.g. &#13;via `CtrlCmd+W`) and are exempted from mass close actions (like "Close Others") * enabling `workbench.editor.limit.enabled` will never close a pinned tab when the limit is reached * pinned tabs work together with any of the tab related settings * `workbench.editor.tabSizing` set to `shrink` and `fit` * `workbench.editor.tabCloseButton` set to `left`, `right` or `off` (note that pinned tabs never show any close button) * `workbench.editor.highlightModifiedTabs` enabled to show a dirty indication for dirty pinned tabs * `workbench.editor.openPositioning` set to `left`, `right`, `first` or `last` does never mix pinned editors with non-pinned * pinned tabs preserve their state * saving an untitled editor that is pinned shows the result as pinned too * switching an editor for displaying a resource (markdown preview vs markdown text) should preserve pinned state * the command to reopen a closed tab restores a pinned tab at the right index and preserves pinned state * verify drag and drop * moving pinned tabs preserves pinned state if dropped onto other pinned tabs * moving pinned tabs onto a non-pinned tab unpins it * moving unpinned tabs onto pinned tab pins it * this works in the same group and across editor groups or windows * pin an editor that we do not restore after restart (e.g. a SCM diff editor) and verify that after restart the number of pinned vs unpinned editors is not broken * disable tabs when you have pinned tabs and ensure that they drop their pinned status (e.g. now mass close commands work on them)
test
test pinned tabs refs anyos jrieken anyos joaomoreno complexity authors bpasero pinned tabs work when tabs are enabled through the following interactions from the context menu of a tab pin unpin via keybinding via global action from the command palette targeting the active editor verify tabs can be pinned and unpinned in each editor group pinned tabs always appear before unpinned tabs opening a new tab always opens it after the last pinned tab pinned tabs remain visible even when many tabs are opened and scrollbars appear non pinned tabs scroll under pinned tabs pinned tabs eventually scroll as normal tabs once you make the size of an editor group small enough is the limit the active tab is always fully revealed in the presence of pinned tabs pinned tabs show with a small fixed size showing only the icon if icons are enabled showing the first letter of the file name otherwise pinned tabs can only be closed explicitly e g via ctrlcmd w and are exempted from mass close actions like close others enabling workbench editor limit enabled will never close a pinned tab when the limit is reached pinned tabs work together with any of the tab related settings workbench editor tabsizing set to shrink and fit workbench editor tabclosebutton set to left right or off note that pinned tabs never show any close button workbench editor highlightmodifiedtabs enabled to show a dirty indication for dirty pinned tabs workbench editor openpositioning set to left right first or last does never mix pinned editors with non pinned pinned tabs preserve their state saving an untitled editor that is pinned shows the result as pinned too switching an editor for displaying a resource markdown preview vs markdown text should preserve pinned state the command to reopen a closed tab restores a pinned tab at the right index and preserves pinned state verify drag and drop moving pinned tabs preserves pinned state if dropped onto other pinned tabs moving pinned tabs onto a non pinned tab unpins it moving unpinned tabs onto pinned tab pins it this works in the same group and across editor groups or windows pin an editor that we do not restore after restart e g a scm diff editor and verify that after restart the number of pinned vs unpinned editors is not broken disable tabs when you have pinned tabs and ensure that they drop their pinned status e g now mass close commands work on them
1
63,783
26,516,106,967
IssuesEvent
2023-01-18 20:57:00
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
VM service protocol doesn't expose set "elements"
area-vm customer-google3 vm-service
`pkg:vm_service` doesn't appear to treat `Set`s as collections. For example, when inspecting an object of type `LinkedHashSet` over the VM service protocol, it doesn't have any `elements`. This leads to us treating it as a custom object type, exposing its fields instead of its element. To make matters worse, on the web, set is backed by opaque JavaScript objects which hide the actual contents of the set. ![Screen Shot 2021-02-02 at 17 32 03](https://user-images.githubusercontent.com/1916163/106685612-96305780-657d-11eb-9492-d6ea79a89287.png) @grouma This was the issue I mentioned today – turns out its an issue with sets, not maps! @bkonyi Is this intentional – are their any plans to treat sets as first-class collections (like lists and maps)?
1.0
VM service protocol doesn't expose set "elements" - `pkg:vm_service` doesn't appear to treat `Set`s as collections. For example, when inspecting an object of type `LinkedHashSet` over the VM service protocol, it doesn't have any `elements`. This leads to us treating it as a custom object type, exposing its fields instead of its element. To make matters worse, on the web, set is backed by opaque JavaScript objects which hide the actual contents of the set. ![Screen Shot 2021-02-02 at 17 32 03](https://user-images.githubusercontent.com/1916163/106685612-96305780-657d-11eb-9492-d6ea79a89287.png) @grouma This was the issue I mentioned today – turns out its an issue with sets, not maps! @bkonyi Is this intentional – are their any plans to treat sets as first-class collections (like lists and maps)?
non_test
vm service protocol doesn t expose set elements pkg vm service doesn t appear to treat set s as collections for example when inspecting an object of type linkedhashset over the vm service protocol it doesn t have any elements this leads to us treating it as a custom object type exposing its fields instead of its element to make matters worse on the web set is backed by opaque javascript objects which hide the actual contents of the set grouma this was the issue i mentioned today – turns out its an issue with sets not maps bkonyi is this intentional – are their any plans to treat sets as first class collections like lists and maps
0
42,475
5,438,815,700
IssuesEvent
2017-03-06 11:34:09
Jumpscale/jumpscale_core8
https://api.github.com/repos/Jumpscale/jumpscale_core8
closed
jsagent8 is broken
needs_auto_test priority_major state_verification
jsagent8 is the jsagent in jumpscale8. It still relies on j.dirs.hrdDir which does not exist anymore. j.application.config behaves in another way then before.
1.0
jsagent8 is broken - jsagent8 is the jsagent in jumpscale8. It still relies on j.dirs.hrdDir which does not exist anymore. j.application.config behaves in another way then before.
test
is broken is the jsagent in it still relies on j dirs hrddir which does not exist anymore j application config behaves in another way then before
1
351,965
32,039,547,804
IssuesEvent
2023-09-22 18:05:43
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix tensor.test_torch_tensor_imag
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix tensor.test_torch_tensor_imag - | | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6276216926/job/17045408750"><img src=https://img.shields.io/badge/-success-success></a>
test
fix tensor test torch tensor imag numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
1
799,050
28,300,561,459
IssuesEvent
2023-04-10 05:27:34
googleapis/google-cloud-ruby
https://api.github.com/repos/googleapis/google-cloud-ruby
closed
[Nightly CI Failures] Failures detected for google-cloud-channel-v1
type: bug priority: p1 nightly failure
At 2023-04-09 08:53:10 UTC, detected failures in google-cloud-channel-v1 for: yard report_key_cb9f632312116c74b2e0911c1cbc908c
1.0
[Nightly CI Failures] Failures detected for google-cloud-channel-v1 - At 2023-04-09 08:53:10 UTC, detected failures in google-cloud-channel-v1 for: yard report_key_cb9f632312116c74b2e0911c1cbc908c
non_test
failures detected for google cloud channel at utc detected failures in google cloud channel for yard report key
0
809,624
30,202,231,283
IssuesEvent
2023-07-05 06:57:36
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.google.com - site is not usable
browser-firefox-mobile priority-critical engine-gecko android13
<!-- @browser: Firefox Mobile 117.0 --> <!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:109.0) Gecko/117.0 Firefox/117.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/124429 --> **URL**: https://www.google.com/search?hl=it&q=mappa%20binari%20tram%20milano#tduds=!1m6!1m2!1s0x4786c3fb5050e633:0xecd24bb3512bb82b!2sbinari!2m2!1d9.1690141!2d45.453832!1m6!1m2!1s0x4786c1493f1275e7:0x3cffcd13c6740e8d!2smilano!2m2!1d9.189982!2d45.464203499999996!2m0!3e0 **Browser / Version**: Firefox Mobile 117.0 **Operating System**: Android 13 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Buttons or links not working **Steps to Reproduce**: the site is unclickable. Any part of the site is static <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - site is not usable - <!-- @browser: Firefox Mobile 117.0 --> <!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:109.0) Gecko/117.0 Firefox/117.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/124429 --> **URL**: https://www.google.com/search?hl=it&q=mappa%20binari%20tram%20milano#tduds=!1m6!1m2!1s0x4786c3fb5050e633:0xecd24bb3512bb82b!2sbinari!2m2!1d9.1690141!2d45.453832!1m6!1m2!1s0x4786c1493f1275e7:0x3cffcd13c6740e8d!2smilano!2m2!1d9.189982!2d45.464203499999996!2m0!3e0 **Browser / Version**: Firefox Mobile 117.0 **Operating System**: Android 13 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Buttons or links not working **Steps to Reproduce**: the site is unclickable. Any part of the site is static <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_test
site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description buttons or links not working steps to reproduce the site is unclickable any part of the site is static browser configuration none from with ❤️
0
120,929
10,142,066,329
IssuesEvent
2019-08-03 20:06:01
haxeui/haxeui-core
https://api.github.com/repos/haxeui/haxeui-core
closed
background-image-clip causes $smake crash in buttons example
retest required
The [buttons example](https://github.com/haxeui/component-examples/tree/master/buttons) for openfl fails to build when trying to use `background-image-clip` in any of the button-icon styles. After commenting that specific one out it compiles, but looks weird. ## Expected Behavior We'd be able to use clipped images. ## Current Behavior Compile-time error: ```terminal Called from ? line 1 Called from ApplicationMain.hx line 25 Called from ApplicationMain.hx line 130 Called from lime/app/Application.hx line 150 Called from lime/_internal/backend/native/NativeApplication.hx line 146 Called from lime/_internal/backend/native/NativeApplication.hx line 370 Called from lime/_internal/macros/EventMacro.hx line 91 Called from openfl/display/Stage.hx line 1878 Called from openfl/display/Stage.hx line 1168 Called from openfl/display/Stage.hx line 1428 Called from C:\HaxeToolkit\haxe\std/neko/Lib.hx line 65 Called from openfl/display/Stage.hx line 1164 Called from openfl/display/DisplayObject.hx line 1430 Called from openfl/events/EventDispatcher.hx line 402 Called from haxe/ui/backend/TimerImpl.hx line 16 Called from haxe/ui/util/Timer.hx line 10 Called from haxe/ui/validation/ValidationManager.hx line 130 Called from haxe/ui/core/Component.hx line 1293 Called from haxe/ui/backend/ComponentImpl.hx line 65 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 159 Called from haxe/ui/ToolkitAssets.hx line 86 Called from haxe/ui/backend/AssetsImpl.hx line 49 Called from haxe/ui/ToolkitAssets.hx line 88 Called from haxe/ui/ToolkitAssets.hx line 113 Called from haxe/ui/util/CallbackMap.hx line 66 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 161 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 198 Called from openfl/display/BitmapData.hx line 128 Called from openfl/display/BitmapData.hx line 278 Called from lime/utils/ArrayBufferView.hx line 7 Called from lime/utils/ArrayBufferView.hx line 37 Called from haxe/io/Bytes.hx line 508 Uncaught exception 
- $smake ``` ## Steps to Reproduce (for bugs) 1. Clone the examples repo https://github.com/haxeui/component-examples 2. `cd buttons` 3. `haxelib run lime test neko` 4. Error . . . Comment out all `background-image-clip` to see it work ## Test app / minimal test case ```xml <vbox style="padding: 5px;"> <style> .bitmapButton { background-image: "haxeui-core/styles/default/haxeui_tiny.png"; background-image-clip: 0px 0px 40px 94px; background-image-slice: 10px 10px 30px 84px; } </style> <grid columns="2"> <button text="Bitmap" styleName="bitmapButton" /> </grid> </vbox> ``` ## Context I just wanted a custom-textured button. ## Your Environment - master branch ( core, openfl-ui ) - Haxe 4rc3 - Openfl 8.9.1 , lime 7.5.0 - Windows 10 - Neko
1.0
background-image-clip causes $smake crash in buttons example - The [buttons example](https://github.com/haxeui/component-examples/tree/master/buttons) for openfl fails to build when trying to use `background-image-clip` in any of the button-icon styles. After commenting that specific one out it compiles, but looks weird. ## Expected Behavior We'd be able to use clipped images. ## Current Behavior Compile-time error: ```terminal Called from ? line 1 Called from ApplicationMain.hx line 25 Called from ApplicationMain.hx line 130 Called from lime/app/Application.hx line 150 Called from lime/_internal/backend/native/NativeApplication.hx line 146 Called from lime/_internal/backend/native/NativeApplication.hx line 370 Called from lime/_internal/macros/EventMacro.hx line 91 Called from openfl/display/Stage.hx line 1878 Called from openfl/display/Stage.hx line 1168 Called from openfl/display/Stage.hx line 1428 Called from C:\HaxeToolkit\haxe\std/neko/Lib.hx line 65 Called from openfl/display/Stage.hx line 1164 Called from openfl/display/DisplayObject.hx line 1430 Called from openfl/events/EventDispatcher.hx line 402 Called from haxe/ui/backend/TimerImpl.hx line 16 Called from haxe/ui/util/Timer.hx line 10 Called from haxe/ui/validation/ValidationManager.hx line 130 Called from haxe/ui/core/Component.hx line 1293 Called from haxe/ui/backend/ComponentImpl.hx line 65 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 159 Called from haxe/ui/ToolkitAssets.hx line 86 Called from haxe/ui/backend/AssetsImpl.hx line 49 Called from haxe/ui/ToolkitAssets.hx line 88 Called from haxe/ui/ToolkitAssets.hx line 113 Called from haxe/ui/util/CallbackMap.hx line 66 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 161 Called from haxe/ui/backend/openfl/OpenFLStyleHelper.hx line 198 Called from openfl/display/BitmapData.hx line 128 Called from openfl/display/BitmapData.hx line 278 Called from lime/utils/ArrayBufferView.hx line 7 Called from lime/utils/ArrayBufferView.hx 
line 37 Called from haxe/io/Bytes.hx line 508 Uncaught exception - $smake ``` ## Steps to Reproduce (for bugs) 1. Clone the examples repo https://github.com/haxeui/component-examples 2. `cd buttons` 3. `haxelib run lime test neko` 4. Error . . . Comment out all `background-image-clip` to see it work ## Test app / minimal test case ```xml <vbox style="padding: 5px;"> <style> .bitmapButton { background-image: "haxeui-core/styles/default/haxeui_tiny.png"; background-image-clip: 0px 0px 40px 94px; background-image-slice: 10px 10px 30px 84px; } </style> <grid columns="2"> <button text="Bitmap" styleName="bitmapButton" /> </grid> </vbox> ``` ## Context I just wanted a custom-textured button. ## Your Environment - master branch ( core, openfl-ui ) - Haxe 4rc3 - Openfl 8.9.1 , lime 7.5.0 - Windows 10 - Neko
test
background image clip causes smake crash in buttons example the for openfl fails to build when trying to use background image clip in any of the button icon styles after commenting that specific one out it compiles but looks weird expected behavior we d be able to use clipped images current behavior compile time error terminal called from line called from applicationmain hx line called from applicationmain hx line called from lime app application hx line called from lime internal backend native nativeapplication hx line called from lime internal backend native nativeapplication hx line called from lime internal macros eventmacro hx line called from openfl display stage hx line called from openfl display stage hx line called from openfl display stage hx line called from c haxetoolkit haxe std neko lib hx line called from openfl display stage hx line called from openfl display displayobject hx line called from openfl events eventdispatcher hx line called from haxe ui backend timerimpl hx line called from haxe ui util timer hx line called from haxe ui validation validationmanager hx line called from haxe ui core component hx line called from haxe ui backend componentimpl hx line called from haxe ui backend openfl openflstylehelper hx line called from haxe ui toolkitassets hx line called from haxe ui backend assetsimpl hx line called from haxe ui toolkitassets hx line called from haxe ui toolkitassets hx line called from haxe ui util callbackmap hx line called from haxe ui backend openfl openflstylehelper hx line called from haxe ui backend openfl openflstylehelper hx line called from openfl display bitmapdata hx line called from openfl display bitmapdata hx line called from lime utils arraybufferview hx line called from lime utils arraybufferview hx line called from haxe io bytes hx line uncaught exception smake steps to reproduce for bugs clone the examples repo cd buttons haxelib run lime test neko error comment out all background image clip to see it work test app 
minimal test case xml bitmapbutton background image haxeui core styles default haxeui tiny png background image clip background image slice context i just wanted a custom textured button your environment master branch core openfl ui haxe openfl lime windows neko
1
247,881
20,988,431,842
IssuesEvent
2022-03-29 06:59:08
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: jepsen/monotonic/parts-start-kill-2 failed
C-test-failure O-robot O-roachtest branch-master release-blocker T-kv
roachtest.jepsen/monotonic/parts-start-kill-2 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/jepsen/monotonic/parts-start-kill-2) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305): ``` (1) attached stack trace -- stack trace: | main.(*clusterImpl).RunE | main/pkg/cmd/roachtest/cluster.go:1987 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (2) output in run_060242.220902237_n6_bash Wraps: (3) bash -e -c "\ | cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ | ~/lein run test \ | --tarball file://${PWD}/cockroach.tgz \ | --username ${USER} \ | --ssh-private-key ~/.ssh/id_rsa \ | --os ubuntu \ | --time-limit 300 \ | --concurrency 30 \ | --recovery-time 25 \ | --test-count 1 \ | -n 10.142.0.83 -n 10.142.0.78 -n 10.142.0.76 -n 10.142.0.75 -n 10.142.0.79 \ | --test monotonic --nemesis parts --nemesis2 start-kill-2 \ | > invoke.log 2>&1 \ | " returned | stderr: | | stdout: Wraps: (4) SSH_PROBLEM Wraps: (5) Node 6. 
Command with error: | `````` | bash -e -c "\ | cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ | ~/lein run test \ | --tarball file://${PWD}/cockroach.tgz \ | --username ${USER} \ | --ssh-private-key ~/.ssh/id_rsa \ | --os ubuntu \ | --time-limit 300 \ | --concurrency 30 \ | --recovery-time 25 \ | --test-count 1 \ | -n 10.142.0.83 -n 10.142.0.78 -n 10.142.0.76 -n 10.142.0.75 -n 10.142.0.79 \ | --test monotonic --nemesis parts --nemesis2 start-kill-2 \ | > invoke.log 2>&1 \ | " | `````` Wraps: (6) exit status 255 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/monotonic/parts-start-kill-2.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-14352
2.0
roachtest: jepsen/monotonic/parts-start-kill-2 failed - roachtest.jepsen/monotonic/parts-start-kill-2 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/jepsen/monotonic/parts-start-kill-2) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305): ``` (1) attached stack trace -- stack trace: | main.(*clusterImpl).RunE | main/pkg/cmd/roachtest/cluster.go:1987 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (2) output in run_060242.220902237_n6_bash Wraps: (3) bash -e -c "\ | cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ | ~/lein run test \ | --tarball file://${PWD}/cockroach.tgz \ | --username ${USER} \ | --ssh-private-key ~/.ssh/id_rsa \ | --os ubuntu \ | --time-limit 300 \ | --concurrency 30 \ | --recovery-time 25 \ | --test-count 1 \ | -n 10.142.0.83 -n 10.142.0.78 -n 10.142.0.76 -n 10.142.0.75 -n 10.142.0.79 \ | --test monotonic --nemesis parts --nemesis2 start-kill-2 \ | > invoke.log 2>&1 \ | " returned | stderr: | | stdout: Wraps: (4) SSH_PROBLEM Wraps: (5) Node 6. 
Command with error: | `````` | bash -e -c "\ | cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ | ~/lein run test \ | --tarball file://${PWD}/cockroach.tgz \ | --username ${USER} \ | --ssh-private-key ~/.ssh/id_rsa \ | --os ubuntu \ | --time-limit 300 \ | --concurrency 30 \ | --recovery-time 25 \ | --test-count 1 \ | -n 10.142.0.83 -n 10.142.0.78 -n 10.142.0.76 -n 10.142.0.75 -n 10.142.0.79 \ | --test monotonic --nemesis parts --nemesis2 start-kill-2 \ | > invoke.log 2>&1 \ | " | `````` Wraps: (6) exit status 255 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/monotonic/parts-start-kill-2.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-14352
test
roachtest jepsen monotonic parts start kill failed roachtest jepsen monotonic parts start kill with on master attached stack trace stack trace main clusterimpl rune main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go runtime goexit goroot src runtime asm s wraps output in run bash wraps bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test monotonic nemesis parts start kill invoke log returned stderr stdout wraps ssh problem wraps node command with error bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test monotonic nemesis parts start kill invoke log wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors ssh hintdetail withdetail exec exiterror help see see cc cockroachdb kv triage jira issue crdb
1
733,354
25,302,864,625
IssuesEvent
2022-11-17 12:04:58
rangav/thunder-client-support
https://api.github.com/repos/rangav/thunder-client-support
closed
File Form Checkbox disappear (UI)
bug Priority
**Describe the bug** File Checkbox for Forms is not visible when the request editor is too narrow and the response view goes underneath. **Visible** ![image](https://user-images.githubusercontent.com/44891585/201894128-ff603fc9-ca1c-433a-9fd3-adb92c7cb68a.png) **HIDDEN** ![image](https://user-images.githubusercontent.com/44891585/201894544-80554dd8-bdd9-4ff5-92e4-fc6b4b5ee4dc.png) **Expected behavior** checkbox should be visible **Platform:** - OS: Linux Mint - vscode version: 1.68.0 - extension version: v1.20.1 **Solution:** Remove that checkbox and let there be a first default unselected file form input, similar to field-name-values
1.0
File Form Checkbox disappear (UI) - **Describe the bug** File Checkbox for Forms is not visible when the request editor is too narrow and the response view goes underneath. **Visible** ![image](https://user-images.githubusercontent.com/44891585/201894128-ff603fc9-ca1c-433a-9fd3-adb92c7cb68a.png) **HIDDEN** ![image](https://user-images.githubusercontent.com/44891585/201894544-80554dd8-bdd9-4ff5-92e4-fc6b4b5ee4dc.png) **Expected behavior** checkbox should be visible **Platform:** - OS: Linux Mint - vscode version: 1.68.0 - extension version: v1.20.1 **Solution:** Remove that checkbox and let there be a first default unselected file form input, similar to field-name-values
non_test
file form checkbox disappear ui describe the bug file checkbox for forms is not visible when the request editor is too narrow and the response view goes underneath visible hidden expected behavior checkbox should be visible platform os linux mint vscode version extension version solution remove that checkbox and let there be a first default unselected file form input similar to field name values
0
244,336
20,624,471,172
IssuesEvent
2022-03-07 20:53:31
backend-br/vagas
https://api.github.com/repos/backend-br/vagas
closed
[Remoto] Back-end Developer @ Ecossistema EZ
PJ Pleno Remoto DevOps Testes automatizados NoSQL CI GraphQL Scrum Rest
## Nossa empresa A EZ.devs é um ecossistema que conecta pessoas de tecnologia a startups e scale-ups que estão mudando o mundo Alocamos você em empresas do Brasil (e, em breve, do mundo!) em times de tecnologia de alta performance. 👩🏽‍💻 [Em nossa plataforma](https://talentos.ezdevs.com.br/?ref=g_git) você encontra apenas oportunidades de trabalho 100% remoto, boa remuneração + benefícios e um ambiente incrível! Além disso, os talentos realizam apenas um processo seletivo: não precisa ficar refazendo testes toda vez que quiser uma nova oportunidade. Além de ser super rápido com apenas duas etapas! 🤩 A EZ é um ambiente remote-first. Você tem a liberdade e autonomia para decidir da onde quer trabalhar. ## Descrição da vaga Essa pode ser sua oportunidade de trabalhar em empresas como KOVI, Ahgora e Linx! O que você vai fazer: - Projetar e construir aplicativos de software seguros e escaláveis; - Atuar como mentor e disseminar conhecimento para outros membros do time; - Colaborar com os times de engenharia, produto e negócio na construção dos produtos; - Escrever código limpo, testável e de fácil manutenção. 
## Local Remoto ## Requisitos **Obrigatórios:** - 2,5 anos de experiência como desenvolvedor front-end; - Experiência com ferramentas de versão de código; - Ter domínio de uma ou mais tecnologias back-end (Preferencialmente Node.js e .NET/C#); - Habilidades interpessoais de escrita e de comunicação verbal; - Proatividade na busca de soluções e novas formas de desenvolvimento; - Ter trabalhado com metodologia ágil / scrum / kanban; - Pessoa auto gerenciável, que busca conhecimento e seja resolutivo nos problemas; **Diferenciais:** - Experiência trabalhando com banco de dados relacionais e NoSQL; - Experiência aplicando conceitos DevOps nos projetos em que trabalhou; - Experiência com testes automatizados e ambientes de CI/CD; - Experiência sólida desenvolvendo APIs com REST, gRPC ou GraphQL; ## Benefícios - Aulas de inglês - Auxílio psicólogo - Gympass Confira as faixas salariais [aqui](https://talentos.ezdevs.com.br/salarios/?ref=g_git). ## Contratação PJ (pelos primeiros 6 meses) - Entenda mais sobre nosso [Modelo de contratação](https://talentos.ezdevs.com.br/modelo-de-contratacao/?ref=g_git) ## Como se candidatar Cadastre-se através desse [link](https://app.ezdevs.com.br/cadastro/?ref=g_git). Para conferir a página da oportunidade [aqui](https://talentos.ezdevs.com.br/carreiras/back-end-pleno/?ref=g_git). Confira todas as oportunidades de carreira [aqui](https://talentos.ezdevs.com.br/carreiras-do-ecossistema/?ref=g_git). ## Tempo médio de feedbacks Costumamos enviar feedbacks em até 10 dias após o processo único. ## Labels #### Alocação - Remoto #### Regime - PJ #### Nível - Pleno
1.0
[Remoto] Back-end Developer @ Ecossistema EZ - ## Nossa empresa A EZ.devs é um ecossistema que conecta pessoas de tecnologia a startups e scale-ups que estão mudando o mundo Alocamos você em empresas do Brasil (e, em breve, do mundo!) em times de tecnologia de alta performance. 👩🏽‍💻 [Em nossa plataforma](https://talentos.ezdevs.com.br/?ref=g_git) você encontra apenas oportunidades de trabalho 100% remoto, boa remuneração + benefícios e um ambiente incrível! Além disso, os talentos realizam apenas um processo seletivo: não precisa ficar refazendo testes toda vez que quiser uma nova oportunidade. Além de ser super rápido com apenas duas etapas! 🤩 A EZ é um ambiente remote-first. Você tem a liberdade e autonomia para decidir da onde quer trabalhar. ## Descrição da vaga Essa pode ser sua oportunidade de trabalhar em empresas como KOVI, Ahgora e Linx! O que você vai fazer: - Projetar e construir aplicativos de software seguros e escaláveis; - Atuar como mentor e disseminar conhecimento para outros membros do time; - Colaborar com os times de engenharia, produto e negócio na construção dos produtos; - Escrever código limpo, testável e de fácil manutenção. 
## Local Remoto ## Requisitos **Obrigatórios:** - 2,5 anos de experiência como desenvolvedor front-end; - Experiência com ferramentas de versão de código; - Ter domínio de uma ou mais tecnologias back-end (Preferencialmente Node.js e .NET/C#); - Habilidades interpessoais de escrita e de comunicação verbal; - Proatividade na busca de soluções e novas formas de desenvolvimento; - Ter trabalhado com metodologia ágil / scrum / kanban; - Pessoa auto gerenciável, que busca conhecimento e seja resolutivo nos problemas; **Diferenciais:** - Experiência trabalhando com banco de dados relacionais e NoSQL; - Experiência aplicando conceitos DevOps nos projetos em que trabalhou; - Experiência com testes automatizados e ambientes de CI/CD; - Experiência sólida desenvolvendo APIs com REST, gRPC ou GraphQL; ## Benefícios - Aulas de inglês - Auxílio psicólogo - Gympass Confira as faixas salariais [aqui](https://talentos.ezdevs.com.br/salarios/?ref=g_git). ## Contratação PJ (pelos primeiros 6 meses) - Entenda mais sobre nosso [Modelo de contratação](https://talentos.ezdevs.com.br/modelo-de-contratacao/?ref=g_git) ## Como se candidatar Cadastre-se através desse [link](https://app.ezdevs.com.br/cadastro/?ref=g_git). Para conferir a página da oportunidade [aqui](https://talentos.ezdevs.com.br/carreiras/back-end-pleno/?ref=g_git). Confira todas as oportunidades de carreira [aqui](https://talentos.ezdevs.com.br/carreiras-do-ecossistema/?ref=g_git). ## Tempo médio de feedbacks Costumamos enviar feedbacks em até 10 dias após o processo único. ## Labels #### Alocação - Remoto #### Regime - PJ #### Nível - Pleno
test
back end developer ecossistema ez nossa empresa a ez devs é um ecossistema que conecta pessoas de tecnologia a startups e scale ups que estão mudando o mundo alocamos você em empresas do brasil e em breve do mundo em times de tecnologia de alta performance 👩🏽‍💻 você encontra apenas oportunidades de trabalho remoto boa remuneração benefícios e um ambiente incrível além disso os talentos realizam apenas um processo seletivo não precisa ficar refazendo testes toda vez que quiser uma nova oportunidade além de ser super rápido com apenas duas etapas 🤩 a ez é um ambiente remote first você tem a liberdade e autonomia para decidir da onde quer trabalhar descrição da vaga essa pode ser sua oportunidade de trabalhar em empresas como kovi ahgora e linx o que você vai fazer projetar e construir aplicativos de software seguros e escaláveis atuar como mentor e disseminar conhecimento para outros membros do time colaborar com os times de engenharia produto e negócio na construção dos produtos escrever código limpo testável e de fácil manutenção local remoto requisitos obrigatórios anos de experiência como desenvolvedor front end experiência com ferramentas de versão de código ter domínio de uma ou mais tecnologias back end preferencialmente node js e net c habilidades interpessoais de escrita e de comunicação verbal proatividade na busca de soluções e novas formas de desenvolvimento ter trabalhado com metodologia ágil scrum kanban pessoa auto gerenciável que busca conhecimento e seja resolutivo nos problemas diferenciais experiência trabalhando com banco de dados relacionais e nosql experiência aplicando conceitos devops nos projetos em que trabalhou experiência com testes automatizados e ambientes de ci cd experiência sólida desenvolvendo apis com rest grpc ou graphql benefícios aulas de inglês auxílio psicólogo gympass confira as faixas salariais contratação pj pelos primeiros meses entenda mais sobre nosso como se candidatar cadastre se através desse para conferir a página da 
oportunidade confira todas as oportunidades de carreira tempo médio de feedbacks costumamos enviar feedbacks em até dias após o processo único labels alocação remoto regime pj nível pleno
1
448,724
12,956,436,321
IssuesEvent
2020-07-20 08:13:00
3liz/lizmap-web-client
https://api.github.com/repos/3liz/lizmap-web-client
closed
Wrong UPDATE query sent to PostgreSQL, data loss
High priority bug data editor
### What is the bug? I have just noticed the following, source of big concerns. 1) I have a PostGIS polygon table/layer which has quite a few columns but not many records (I don't think this is important for the problem) 2) The layer is added to a minimal project (just this layer). The project is set up to be served with QGIS Server and the Lizmap configuration file is generated. The layer is configured to be editable (insert, update, delete) in Lizmap. 3) If the layer has **no** D&D form defined (so using the "auto generate" QGIS option) there are no issues when updating attributes within LMWC. 4) The issue arise when using a D&D form: in this specific test case in the form it was added just the "gid" (PK) field, another field that happens to have been defined as int8, and another field with datatype varchar. All the other attributes of this layer are left out of the form, 5) A feature is edited in LMWC with this form, and the value of this int8 column is changed. Edits are saved. 6) Inspecting the table I can see how the value of the int8 column was updated correctly, and the value of the varchar column was left unchanged (as expected), but for my great surprise all the values of the other columns (that are not part of the form) were completely **wiped out**. 
7) I then inspected the UPDATE query sent by LMWC to PostgreSQL and as a fact all the wiped out fields were set as NULL The query looks like: ``` UPDATE "schemaname"."table" SET "geom"=ST_GeomFromText('MULTIPOLYGON(((-5463.366843911836 -7237.887256608387,-5452.475539068239 -7228.778165316916,-5427.128502341457 -7211.649113131462,-5412.27672300934...)))', 3763), "id_parceir"=NULL, "parcela"=NULL, "tipo_benef"=NULL, "nome_benef"='First Last', "nif_benef"=NULL, "telf_benef"=NULL, "mail_benef"=NULL, "parceiro"=NULL, "regiao_celpa"=NULL, "concelho"=NULL, "freguesia"=NULL, "rot_atual"=NULL, "ano_inicio"=NULL, "mes_inicio"=NULL, "densidade"=NULL, "area_ha"=NULL, "certif_fl"=NULL, "grupo_cert"=NULL, "codig_cert"=NULL, "sist_cert"=NULL, "status_cvi_el"=NULL, "tipo_veg_el"=NULL, "grau_cobert_el"=NULL, "altura_veg_el"=NULL, "metodo_cvi_el"=NULL, "status_cvi_l"=NULL, "tipo_veg_l"=NULL, "g rau_cobert_l"=NULL, "altura_veg_l"=NULL, "metodo_cvi_l"=NULL, "psf_cvi"=NULL, "data_cvi"=NULL, "controlo_cvi"=NULL, "observ_cvi"=NULL, "status_svaras"=NULL, "data_svaras"=NULL, "controlo_svaras"=NULL, "observ_svaras"=NULL, "s tatus_nutricional"=NULL, "metodo_adubacao"=NULL, "tipo_adubo"=NULL, "dose_g_pl"=NULL, "nr_sacos"=NULL, "qtdd_adubo"=NULL, "dose_kg_ha"=NULL, "ano_adubacao"=NULL, "psf_aduba"=NULL, "data_adubacao"=20201006, "controlo_adubacao" =NULL, "observ_adubacao"=NULL, "estado_candidatura"=NULL, "estado_informacao"=NULL, "estado_processo"=NULL, "registo_foto"=NULL, "data_registo"=NULL, "ultima_atualizacao"=NULL, "cod_programa"=NULL, "obs_gerais"=NULL, "grupo_webgis"=NULL, "obsv_hv"=NULL WHERE "gid" = 3907 RETURNING "gid"; ``` As you can see all fields where set to NULL but "data_adubacao" that is the field that has been edited and "nome_benef" that is the other field in the form. All values of fields not in the form were wiped out. 
Additional notes: 1) if the fields that are not in the form are **not** published as WMS/WFS in the layer properties in QGIS then there is no data loss, but this may not be a workaround for all cases 2) another workaround is to put all fields in the D&D form, which again may not be an option for all cases. 3) it is unclear why LMWC sends the UPDATE command for all fields including the ones that were not changed at all (included the geometry). For instance this is the query sent directly by QGIS when changing just the value of 1 field: `UPDATE "schemaname"."table SET "data_adubacao"=20201006 WHERE "gid"=3907` ### Environment - Lizmap version: 3.3.6 - QGIS Server version: 3.10.5 - QGIS Project version: 3.10.5 - OS (Windows, Linux, MacOS, Android…): Linux (server), Windows/Linux (client) - Browser (Firefox, Chrome…): Firefox and Chrome, others not tested - Backend: PostgreSQL 10
1.0
Wrong UPDATE query sent to PostgreSQL, data loss - ### What is the bug? I have just noticed the following, source of big concerns. 1) I have a PostGIS polygon table/layer which has quite a few columns but not many records (I don't think this is important for the problem) 2) The layer is added to a minimal project (just this layer). The project is set up to be served with QGIS Server and the Lizmap configuration file is generated. The layer is configured to be editable (insert, update, delete) in Lizmap. 3) If the layer has **no** D&D form defined (so using the "auto generate" QGIS option) there are no issues when updating attributes within LMWC. 4) The issue arise when using a D&D form: in this specific test case in the form it was added just the "gid" (PK) field, another field that happens to have been defined as int8, and another field with datatype varchar. All the other attributes of this layer are left out of the form, 5) A feature is edited in LMWC with this form, and the value of this int8 column is changed. Edits are saved. 6) Inspecting the table I can see how the value of the int8 column was updated correctly, and the value of the varchar column was left unchanged (as expected), but for my great surprise all the values of the other columns (that are not part of the form) were completely **wiped out**. 
7) I then inspected the UPDATE query sent by LMWC to PostgreSQL and as a fact all the wiped out fields were set as NULL The query looks like: ``` UPDATE "schemaname"."table" SET "geom"=ST_GeomFromText('MULTIPOLYGON(((-5463.366843911836 -7237.887256608387,-5452.475539068239 -7228.778165316916,-5427.128502341457 -7211.649113131462,-5412.27672300934...)))', 3763), "id_parceir"=NULL, "parcela"=NULL, "tipo_benef"=NULL, "nome_benef"='First Last', "nif_benef"=NULL, "telf_benef"=NULL, "mail_benef"=NULL, "parceiro"=NULL, "regiao_celpa"=NULL, "concelho"=NULL, "freguesia"=NULL, "rot_atual"=NULL, "ano_inicio"=NULL, "mes_inicio"=NULL, "densidade"=NULL, "area_ha"=NULL, "certif_fl"=NULL, "grupo_cert"=NULL, "codig_cert"=NULL, "sist_cert"=NULL, "status_cvi_el"=NULL, "tipo_veg_el"=NULL, "grau_cobert_el"=NULL, "altura_veg_el"=NULL, "metodo_cvi_el"=NULL, "status_cvi_l"=NULL, "tipo_veg_l"=NULL, "g rau_cobert_l"=NULL, "altura_veg_l"=NULL, "metodo_cvi_l"=NULL, "psf_cvi"=NULL, "data_cvi"=NULL, "controlo_cvi"=NULL, "observ_cvi"=NULL, "status_svaras"=NULL, "data_svaras"=NULL, "controlo_svaras"=NULL, "observ_svaras"=NULL, "s tatus_nutricional"=NULL, "metodo_adubacao"=NULL, "tipo_adubo"=NULL, "dose_g_pl"=NULL, "nr_sacos"=NULL, "qtdd_adubo"=NULL, "dose_kg_ha"=NULL, "ano_adubacao"=NULL, "psf_aduba"=NULL, "data_adubacao"=20201006, "controlo_adubacao" =NULL, "observ_adubacao"=NULL, "estado_candidatura"=NULL, "estado_informacao"=NULL, "estado_processo"=NULL, "registo_foto"=NULL, "data_registo"=NULL, "ultima_atualizacao"=NULL, "cod_programa"=NULL, "obs_gerais"=NULL, "grupo_webgis"=NULL, "obsv_hv"=NULL WHERE "gid" = 3907 RETURNING "gid"; ``` As you can see all fields where set to NULL but "data_adubacao" that is the field that has been edited and "nome_benef" that is the other field in the form. All values of fields not in the form were wiped out. 
Additional notes: 1) if the fields that are not in the form are **not** published as WMS/WFS in the layer properties in QGIS then there is no data loss, but this may not be a workaround for all cases 2) another workaround is to put all fields in the D&D form, which again may not be an option for all cases. 3) it is unclear why LMWC sends the UPDATE command for all fields including the ones that were not changed at all (including the geometry). For instance this is the query sent directly by QGIS when changing just the value of 1 field: `UPDATE "schemaname"."table" SET "data_adubacao"=20201006 WHERE "gid"=3907` ### Environment - Lizmap version: 3.3.6 - QGIS Server version: 3.10.5 - QGIS Project version: 3.10.5 - OS (Windows, Linux, MacOS, Android…): Linux (server), Windows/Linux (client) - Browser (Firefox, Chrome…): Firefox and Chrome, others not tested - Backend: PostgreSQL 10
non_test
wrong update query sent to postgresql data loss what is the bug i have just noticed the following source of big concerns i have a postgis polygon table layer which has quite a few columns but not many records i don t think this is important for the problem the layer is added to a minimal project just this layer the project is set up to be served with qgis server and the lizmap configuration file is generated the layer is configured to be editable insert update delete in lizmap if the layer has no d d form defined so using the auto generate qgis option there are no issues when updating attributes within lmwc the issue arise when using a d d form in this specific test case in the form it was added just the gid pk field another field that happens to have been defined as and another field with datatype varchar all the other attributes of this layer are left out of the form a feature is edited in lmwc with this form and the value of this column is changed edits are saved inspecting the table i can see how the value of the column was updated correctly and the value of the varchar column was left unchanged as expected but for my great surprise all the values of the other columns that are not part of the form were completely wiped out i then inspected the update query sent by lmwc to postgresql and as a fact all the wiped out fields were set as null the query looks like update schemaname table set geom st geomfromtext multipolygon id parceir null parcela null tipo benef null nome benef first last nif benef null telf benef null mail benef null parceiro null regiao celpa null concelho null freguesia null rot atual null ano inicio null mes inicio null densidade null area ha null certif fl null grupo cert null codig cert null sist cert null status cvi el null tipo veg el null grau cobert el null altura veg el null metodo cvi el null status cvi l null tipo veg l null g rau cobert l null altura veg l null metodo cvi l null psf cvi null data cvi null controlo cvi null observ cvi 
null status svaras null data svaras null controlo svaras null observ svaras null s tatus nutricional null metodo adubacao null tipo adubo null dose g pl null nr sacos null qtdd adubo null dose kg ha null ano adubacao null psf aduba null data adubacao controlo adubacao null observ adubacao null estado candidatura null estado informacao null estado processo null registo foto null data registo null ultima atualizacao null cod programa null obs gerais null grupo webgis null obsv hv null where gid returning gid as you can see all fields where set to null but data adubacao that is the field that has been edited and nome benef that is the other field in the form all values of fields not in the form were wiped out additional notes if the fields that are not in the form are not published as wms wfs in the layer properties in qgis then there is no data loss but this may not be a workaround for all cases another workaround is to put all fields in the d d form which again may not be an option for all cases it is unclear why lmwc sends the update command for all fields including the ones that were not changed at all included the geometry for instance this is the query sent directly by qgis when chancing just the value of field update schemaname table set data adubacao where gid environment lizmap version qgis server version qgis project version os windows linux macos android… linux server windows linux client browser firefox chrome… firefox and chrome others not tested backend postgresql
0
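The contrast between the two queries in the report above (LMWC updating every published field versus QGIS updating only the one that changed) can be sketched as a diff-based statement builder. This is a hypothetical Python helper written for illustration only, not Lizmap's actual code; the table and column names are taken from the report:

```python
def build_update(table, pk_col, pk_value, original, edited):
    """Build a parameterized UPDATE containing only the columns whose
    values differ between the original and the edited feature."""
    changed = {col: val for col, val in edited.items()
               if original.get(col) != val}
    if not changed:
        # Nothing changed: no statement should be sent at all.
        return None, []
    assignments = ", ".join(f'"{col}" = %s' for col in changed)
    sql = f'UPDATE "{table}" SET {assignments} WHERE "{pk_col}" = %s'
    params = list(changed.values()) + [pk_value]
    return sql, params

# Example mirroring the report: only data_adubacao changed, so only it
# appears in the SET clause; nome_benef is untouched.
sql, params = build_update(
    "schemaname.table", "gid", 3907,
    original={"data_adubacao": None, "nome_benef": "First Last"},
    edited={"data_adubacao": 20201006, "nome_benef": "First Last"},
)
```

An approach like this would also avoid rewriting the geometry on every attribute edit, since unchanged columns never reach the SET clause.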
361,886
25,352,822,545
IssuesEvent
2022-11-20 00:49:27
WSU-CPTS415-ParquetParkour/Amazon-CoPurchasing
https://api.github.com/repos/WSU-CPTS415-ParquetParkour/Amazon-CoPurchasing
closed
[TASK] Dataset measurements & statistics
documentation
_Background_: To assist in planning for storage and memory allocation, we need to make measurements of the co-purchasing dataset. From the MS02 requirements: > * If you are using a key-value data model, how many key-value pairs? How many unique keys? What are the data types for keys and values? Are these basic data types or data structures? What’s the physical storage size (in KB/MB/GB). > * If you are using a graph data set: how many nodes and edges? How many attributes are there for the nodes/edges? Is it labelled? Directed? What’s the average degree of the nodes? What’s the density of the graph/network data? What’s the physical storage size (in KB/MB/GB). > * If you are using a document data model, how many documents does your model contain? How many elements / sub-elements does each document have? What are the attributes? What’s the physical storage size (in KB/MB/GB). > * If you are using another non-relational data model, describe your dataset statistics based on this non-relational data model. What’s the physical storage size (in KB/MB/GB). _Problem_: This information is both a requirement for MS02 and a set of measurements which will aid in planning storage needs. _Success Criteria_: Specific requirements of implementation which must be met for this to be accepted as complete. 1. Data model has been defined and documented (see #3). 2. Measurements and metadata specific to that data model have been calculated and documented. 3. Wiki has been updated with findings and the article's title is prepended with the '[MS02R02]' tag.
1.0
[TASK] Dataset measurements & statistics - _Background_: To assist in planning for storage and memory allocation, we need to make measurements of the co-purchasing dataset. From the MS02 requirements: > * If you are using a key-value data model, how many key-value pairs? How many unique keys? What are the data types for keys and values? Are these basic data types or data structures? What’s the physical storage size (in KB/MB/GB). > * If you are using a graph data set: how many nodes and edges? How many attributes are there for the nodes/edges? Is it labelled? Directed? What’s the average degree of the nodes? What’s the density of the graph/network data? What’s the physical storage size (in KB/MB/GB). > * If you are using a document data model, how many documents does your model contain? How many elements / sub-elements does each document have? What are the attributes? What’s the physical storage size (in KB/MB/GB). > * If you are using another non-relational data model, describe your dataset statistics based on this non-relational data model. What’s the physical storage size (in KB/MB/GB). _Problem_: This information is both a requirement for MS02 and a set of measurements which will aid in planning storage needs. _Success Criteria_: Specific requirements of implementation which must be met for this to be accepted as complete. 1. Data model has been defined and documented (see #3). 2. Measurements and metadata specific to that data model have been calculated and documented. 3. Wiki has been updated with findings and the article's title is prepended with the '[MS02R02]' tag.
non_test
dataset measurements statistics background to assist in planning for storage and memory allocation we need to make measurements of the co purchasing dataset from the requirements if you are using a key value data model how may key value pairs how many unique keys what are the data types for keys and values are these basic data types or data structures what’s the physical storage size in kb mb gb if you are using a graph data set how many nodes and edges how many attributes are there for the nodes edges is it labelled directed what’s the average degree of the nodes what’s the density of the graph network data what’s the physical storage size in kb mb gb if you are using a document data model how may documents does your model contain how many elements sub elements does each document has what are the attributes what’s the physical storage size in kb mb gb if you are using another non relational data model describe your dataset statistics based on this non relational data model what’s the physical storage size in kb mb gb problem this information is both a requirement for as well as a set of measurements which will aid in planning storage needs success criteria specific requirements of implementation which must be met for this to be accepted as complete data model has been defined and documented see measurements and metadata specific to that data model has been calculated and documented wiki has been updated with findings and the article s title is prepended with the tag
0
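The graph measurements requested in the task above (average degree, density) reduce to simple arithmetic over the node and edge counts. A small sketch with made-up toy data:

```python
def graph_stats(num_nodes, edges):
    """Average degree and density for a simple undirected graph.
    Each undirected edge contributes 2 to the total degree; density is
    |E| divided by the number of possible edges, n * (n - 1) / 2."""
    num_edges = len(edges)
    avg_degree = 2 * num_edges / num_nodes
    density = num_edges / (num_nodes * (num_nodes - 1) / 2)
    return avg_degree, density

# Toy example: 4 nodes, 3 edges forming a path 0-1-2-3.
avg, dens = graph_stats(4, [(0, 1), (1, 2), (2, 3)])
```

For a directed graph (as a co-purchasing network typically is), the density denominator would be n * (n - 1) instead, since each ordered pair is a possible edge.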
9,612
3,059,986,593
IssuesEvent
2015-08-14 18:02:59
NAVADMC/ADSM
https://api.github.com/repos/NAVADMC/ADSM
closed
Program update process
User Testing
Issue: The current program update methodology needs to be reworked Explanation: When epidemiologists use the software for research, they need to be absolutely certain that the entire software suite is identical throughout the entire research project. Any models created and simulations run must be done so with the exact same code. To do otherwise invalidates their study results, according to proper scientific method. Having the ability to update the software by clicking on the "check updates" button, especially when it isn't working solidly and the program gives no real or standard indication of its current version, could put research and reputation in jeopardy. Suggestions: 1) During program startup the current version (all of the information relating to that version, i.e. build number, version number, and whatever that hexadecimal number stands for. And for each component, i.e. the GUI and the CEngine) should be displayed clearly in a startup box or window. 2) The current adsm.exe program needs to start making use of Windows programming standards. It needs to, and can, pop up message boxes and give user feedback other than a long stream of seemingly meaningless debug messages that scroll off the screen. 3) The program update process should be as it was before. The user should be required to go to the website and download a new version. This makes it a very deliberate act, and provides the user with several advantages: a) The user can run several different versions at the same time on their machine and be absolutely certain of what each version is and is not. b) Assures the user that they have not invalidated their study research by accidentally mixing measurement tools mid-study. c) Assures the user that the update process works as expected and does not have any unknown quirks, as experienced by me in the testing process.
1.0
Program update process - Issue: The current program update methodology needs to be reworked Explanation: When epidemiologists use the software for research, they need to be absolutely certain that the entire software suite is identical throughout the entire research project. Any models created and simulations run must be done so with the exact same code. To do otherwise invalidates their study results, according to proper scientific method. Having the ability to update the software by clicking on the "check updates" button, especially when it isn't working solidly and the program gives no real or standard indication of its current version, could put research and reputation in jeopardy. Suggestions: 1) During program startup the current version (all of the information relating to that version, i.e. build number, version number, and whatever that hexadecimal number stands for. And for each component, i.e. the GUI and the CEngine) should be displayed clearly in a startup box or window. 2) The current adsm.exe program needs to start making use of Windows programming standards. It needs to, and can, pop up message boxes and give user feedback other than a long stream of seemingly meaningless debug messages that scroll off the screen. 3) The program update process should be as it was before. The user should be required to go to the website and download a new version. This makes it a very deliberate act, and provides the user with several advantages: a) The user can run several different versions at the same time on their machine and be absolutely certain of what each version is and is not. b) Assures the user that they have not invalidated their study research by accidentally mixing measurement tools mid-study. c) Assures the user that the update process works as expected and does not have any unknown quirks, as experienced by me in the testing process.
test
program update process issue the current program update methodology needs reworked explanation when epidemiologists use the software for research they need to be absolutely certain that the entire software suite is identical throughout the entire research project any models created and simulations run must be done so with the exact same code to do otherwise invalidates their study results according to proper scientific method having the ability to update the software by clicking on the check updates button especially when it isn t working solidly and the program gives no real or standard indication of its current version could put research and reputation in jeopardy suggestions during program startup the current version all of the information relating to that version i e build number version number and whatever that hexadecimal number stands for and for each component i e the gui and the cengine should be displayed clearly in a startup box or window the current adsm exe program needs to start making use of windows programming standards it needs to and can popup message boxes and give user feedback other than a long stream of seemingly meaningless debug messages that scroll off the screen the program update process should be as it was before the user should be required to go to the website and download a new version this makes it a very deliberate act and provides the user with several advantages a the user can run several different versions at the same time on their machine and be absolutely certain of what each version is and is not b assures the user that they have not invalidated their study research by accidentally mixing measurement tools mid study c assures the user that the update process works as expected and does not have any unknown quirks as experienced by me in the testing process
1
50,126
6,061,442,879
IssuesEvent
2017-06-14 06:37:43
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Leaked socket file after unit test
area/test kind/bug sig/api-machinery
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one): BUG REPORT **What happened**: 4 socket files are left in the workspace after we run "make test" **What you expected to happen**: The workspace should be clean. **How to reproduce it** (as minimally and precisely as possible): $ make test WHAT=k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/registry KUBE_GOFLAGS="-v" $ find -type s ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/127.0.0.1:2105731492 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/127.0.0.1:2105831492 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/localhost:8595437375691616229 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/localhost:85954373756916162290 **Anything else we need to know**:
1.0
Leaked socket file after unit test - **Is this a BUG REPORT or FEATURE REQUEST?** (choose one): BUG REPORT **What happened**: 4 socket files are left in the workspace after we run "make test" **What you expected to happen**: The workspace should be clean. **How to reproduce it** (as minimally and precisely as possible): $ make test WHAT=k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/registry KUBE_GOFLAGS="-v" $ find -type s ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/127.0.0.1:2105731492 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/127.0.0.1:2105831492 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/localhost:8595437375691616229 ./staging/src/k8s.io/apiserver/pkg/registry/generic/registry/localhost:85954373756916162290 **Anything else we need to know**:
test
leaked socket file after unit test is this a bug report or feature request choose one bug report what happened socket files is left in workspace after we run make test what you expected to happen the workspace should be clean how to reproduce it as minimally and precisely as possible make test what io kubernetes vendor io apiserver pkg registry generic registry kube goflags v find type s staging src io apiserver pkg registry generic registry staging src io apiserver pkg registry generic registry staging src io apiserver pkg registry generic registry localhost staging src io apiserver pkg registry generic registry localhost anything else we need to know
1
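As a general illustration of the cleanup pattern that prevents this class of leak (a hedged Python sketch; the Kubernetes code in the report is Go, and this is not its actual fix), the socket can be bound inside a temporary directory and unlinked in a finally block so the file never survives the test:

```python
import os
import socket
import tempfile

def with_unix_socket(run):
    """Bind a Unix domain socket in a temp dir, run the callback with
    the socket path, and guarantee the socket file is removed after."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "test.sock")
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.bind(path)  # binding creates the socket file on disk
            return run(path)
        finally:
            sock.close()
            if os.path.exists(path):
                os.unlink(path)  # never leave the socket file behind

# The socket file exists inside the callback but is gone afterwards,
# even if the callback raises.
seen = with_unix_socket(os.path.exists)
```

Binding under a temporary directory rather than the working directory also means nothing lands in the source tree even if cleanup is skipped.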
39,085
5,217,409,989
IssuesEvent
2017-01-26 13:50:16
hzi-braunschweig/SORMAS-Open
https://api.github.com/repos/hzi-braunschweig/SORMAS-Open
closed
Create mockups for sample testing (lab persona) [2]
accepted Sample Lab Testing
* person in hospital will send lab material physically (probably informant) * at the same time enters sample data into the system * UID of system + another UID from the lab form * lab person receives the sample and has to match it to an entry in sormas * enters lab result for the sample * list of test types: relevant disease, result options * list of labs with definition of test types * possible 2-step process: forward to another lab * multiple tests per sample * sample: UID, Text-ID, Sample referral (lab), 2nd level sample referral, kind-of material + other, date+time of sampling (collecting), timestamp of shipment, shipment details, status: Shipped, Received & Pending, Further testing, Referred to other lab, Tested * test: kind of test, date+time, lab, result (depends on kind of test; can also be text) + text-field, verified (user has to enter verification pin) - [x] App Sample list - [x] App Sample create/edit - [x] Web-UI Sample list - [x] Web-UI Sample create/edit - [x] Web-UI Sample Tests list - [x] Web-UI Sample Tests create/edit - [x] DataDictionary Sample, SampleTest
1.0
Create mockups for sample testing (lab persona) [2] - * person in hospital will send lab material physically (probably informant) * at the same time enters sample data into the system * UID of system + another UID from the lab form * lab person receives the sample and has to match it to an entry in sormas * enters lab result for the sample * list of test types: relevant disease, result options * list of labs with definition of test types * possible 2-step process: forward to another lab * multiple tests per sample * sample: UID, Text-ID, Sample referral (lab), 2nd level sample referral, kind-of material + other, date+time of sampling (collecting), timestamp of shipment, shipment details, status: Shipped, Received & Pending, Further testing, Referred to other lab, Tested * test: kind of test, date+time, lab, result (depends on kind of test; can also be text) + text-field, verified (user has to enter verification pin) - [x] App Sample list - [x] App Sample create/edit - [x] Web-UI Sample list - [x] Web-UI Sample create/edit - [x] Web-UI Sample Tests list - [x] Web-UI Sample Tests create/edit - [x] DataDictionary Sample, SampleTest
test
create mockups for sample testing lab persona person in hospital will send lab material physically probably informant at the same time enters sample data into the system uid of system another uid from the lab form lab person receive sample and has to match it to entry in sormas enters lab result for the sample list of test typs relevant disease result options list of labs with definition of test types possible step process forward to another lab multiple tests per sample sample uid text id sample referral lab level sample referral kind of material other date time of sampling collecting timestamp of shipment shipment details status shipped received pending further testing referred to other lab tested test kind of test date time lab result depends on kind of test can also be text text field verified user has to enter verification pin app sample list app sample create edit web ui sample list web ui sample create edit web ui sample tests list web ui sample tests create edit datadictionary sample sampletest
1
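The sample fields enumerated in the mockup notes above could be modeled roughly as a small data class. The status values come straight from the list; the field types and class layout are assumptions for illustration, not SORMAS's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class SampleStatus(Enum):
    """Status values listed in the mockup notes."""
    SHIPPED = "Shipped"
    RECEIVED_PENDING = "Received & Pending"
    FURTHER_TESTING = "Further testing"
    REFERRED = "Referred to other lab"
    TESTED = "Tested"

@dataclass
class Sample:
    uid: str                 # system UID
    text_id: str             # second UID from the lab form
    referral_lab: str        # sample referral (lab)
    material: str            # kind-of material (+ other)
    sampling_time: datetime  # date+time of sampling (collecting)
    status: SampleStatus = SampleStatus.SHIPPED
    second_level_referral: Optional[str] = None
    shipment_details: Optional[str] = None

s = Sample("S-001", "LAB-42", "Central Lab", "blood",
           datetime(2017, 1, 20, 9, 30))
```

A matching SampleTest class would hold the kind of test, date+time, lab, result, and verification flag described in the same notes.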
130,683
10,640,907,832
IssuesEvent
2019-10-16 08:22:57
enonic/app-contentstudio
https://api.github.com/repos/enonic/app-contentstudio
opened
Add ui-test to verify issue #1095
Test
Inspection panel is not updated after a changes in the site-wizard #1095
1.0
Add ui-test to verify issue #1095 - Inspection panel is not updated after a changes in the site-wizard #1095
test
add ui test to verify issue inspection panel is not updated after a changes in the site wizard
1
192,048
14,598,964,061
IssuesEvent
2020-12-21 02:41:55
GlobantUy/STB-Bank
https://api.github.com/repos/GlobantUy/STB-Bank
opened
[Login - Validations] Validate error message when the email does not exist
TestCase
**Preconditions:** ======================================================= Steps to execute | Expected Result ------------ | ------------- 1: Go to the Loan Simulator ('Simulador de Préstamos')| 2: Click the 'ingresar' (log in) button| 3: Enter a non-existent email and a valid password| The error message 'Los datos ingresados no son correctos, por favor verifique' is displayed 4: Enter an existing email and an invalid password| The error message 'Los datos ingresados no son correctos, por favor verifique' is displayed ======================================================= **Associated US:** #6
1.0
[Login - Validations] Validate error message when the email does not exist - **Preconditions:** ======================================================= Steps to execute | Expected Result ------------ | ------------- 1: Go to the Loan Simulator ('Simulador de Préstamos')| 2: Click the 'ingresar' (log in) button| 3: Enter a non-existent email and a valid password| The error message 'Los datos ingresados no son correctos, por favor verifique' is displayed 4: Enter an existing email and an invalid password| The error message 'Los datos ingresados no son correctos, por favor verifique' is displayed ======================================================= **Associated US:** #6
test
validate error message when the email does not exist preconditions steps to execute expected result go to the loan simulator simulador de préstamos click the ingresar log in button enter a non existent email and a valid password the error message los datos ingresados no son correctos por favor verifique is displayed enter an existing email and an invalid password the error message los datos ingresados no son correctos por favor verifique is displayed associated us
1
50,593
6,103,475,919
IssuesEvent
2017-06-20 18:47:37
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed tests on master: test/TestLogic, test/TestLogic/default, test/TestLogic/default/show_trace, testrace/TestLogic, testrace/TestLogic/default, testrace/TestLogic/default/show_trace, lint/TestStyle, lint/TestStyle/TestErrCheck, lint/TestStyle/TestReturnCheck, lint/TestStyle/TestMetacheck
Robot test-failure
The following tests appear to have failed: [#276796](https://teamcity.cockroachdb.com/viewLog.html?buildId=276796): ``` --- FAIL: test/TestLogic (48.190s) test_log_scope.go:80: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic321497299 test_log_scope.go:63: use -show-logs to present logs inline --- FAIL: test/TestLogic/default (0.170s) null --- FAIL: test/TestLogic/default/show_trace (0.330s) logic_test.go:1707: testdata/logic_test/show_trace:71: expected: (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<kind:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<kind:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 
(0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 but found (query options: "") : (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<semantic_type:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<semantic_type:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next 
range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 logic_test.go:1737: testdata/logic_test/show_trace:96: too many errors encountered, skipping the rest of the input --- FAIL: testrace/TestLogic (433.330s) <autogenerated>:10: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic751330862 <autogenerated>:9: use -show-logs to present logs inline --- FAIL: testrace/TestLogic/default (1.020s) null --- FAIL: testrace/TestLogic/default/show_trace (2.780s) logic_test.go:1707: testdata/logic_test/show_trace:71: expected: (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<kind:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<kind:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" 
column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 but found (query options: "") : (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<semantic_type:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<semantic_type:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 
default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 logic_test.go:1737: testdata/logic_test/show_trace:96: too many errors encountered, skipping the rest of the input --- FAIL: lint/TestStyle (30.130s) null --- FAIL: lint/TestStyle/TestErrCheck (8.920s) style_test.go:601: err=exit status 2, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal /go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal error: failed to check packages: could not type check: couldn't load packages due to errors: github.com/cockroachdb/cockroach/pkg/sql/distsqlrun --- FAIL: lint/TestStyle/TestReturnCheck (11.660s) style_test.go:625: 
err=exit status 1, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal /go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal --- FAIL: lint/TestStyle/TestMetacheck (9.550s) style_test.go:804: err=exit status 1, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal /go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal couldn't load packages due to errors: github.com/cockroachdb/cockroach/pkg/sql/distsqlrun ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed tests on master: test/TestLogic, test/TestLogic/default, test/TestLogic/default/show_trace, testrace/TestLogic, testrace/TestLogic/default, testrace/TestLogic/default/show_trace, lint/TestStyle, lint/TestStyle/TestErrCheck, lint/TestStyle/TestReturnCheck, lint/TestStyle/TestMetacheck - The following tests appear to have failed: [#276796](https://teamcity.cockroachdb.com/viewLog.html?buildId=276796): ``` --- FAIL: test/TestLogic (48.190s) test_log_scope.go:80: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic321497299 test_log_scope.go:63: use -show-logs to present logs inline --- FAIL: test/TestLogic/default (0.170s) null --- FAIL: test/TestLogic/default/show_trace (0.330s) logic_test.go:1707: testdata/logic_test/show_trace:71: expected: (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<kind:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<kind:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 
(0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 but found (query options: "") : (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<semantic_type:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<semantic_type:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion 
state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 logic_test.go:1737: testdata/logic_test/show_trace:96: too many errors encountered, skipping the rest of the input --- FAIL: testrace/TestLogic (433.330s) <autogenerated>:10: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic751330862 <autogenerated>:9: use -show-logs to present logs inline --- FAIL: testrace/TestLogic/default (1.020s) null --- FAIL: testrace/TestLogic/default/show_trace (2.780s) logic_test.go:1707: testdata/logic_test/show_trace:71: expected: (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" id:1 type:<kind:INT width:0 precision:0 > nullable:false hidden:false > 
columns:<name:"v" id:2 type:<kind:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 but found (query options: "") : (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /System/"desc-idgen" (0,1) starting plan r1: sending batch 1 Inc, 1 BeginTxn to (n1,s1):1 (0,1) starting plan CPut /Table/2/1/51/"kv"/3/1 -> 52 (0,1) starting plan CPut /Table/3/1/52/2/1 -> table:<name:"kv" id:52 parent_id:51 version:1 up_version:false modification_time:<wall_time:0 logical:0 > columns:<name:"k" 
id:1 type:<semantic_type:INT width:0 precision:0 > nullable:false hidden:false > columns:<name:"v" id:2 type:<semantic_type:INT width:0 precision:0 > nullable:true hidden:false > next_column_id:3 families:<name:"primary" id:0 column_names:"k" column_names:"v" column_ids:1 column_ids:2 default_column_id:2 > next_family_id:1 primary_index:<name:"primary" id:1 unique:true column_names:"k" column_directions:ASC column_ids:1 foreign_key:<table:0 index:0 name:"" validity:Validated shared_prefix_len:0 > interleave:<> > next_index_id:2 privileges:<users:<user:"root" privileges:2 > > next_mutation_id:1 format_version:InterleavedFormatVersion state:PUBLIC view_query:"" > (0,1) starting plan querying next range at /Table/2/1/51/"kv"/3/1 (0,1) starting plan r1: sending batch 2 CPut to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/51/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/0/"system"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/2/1/1/"eventlog"/3/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/12/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan querying next range at /Table/3/1/1/2/1 (0,1) starting plan r1: sending batch 1 Get to (n1,s1):1 (0,1) starting plan r1: sending batch 5 CPut to (n1,s1):1 logic_test.go:1737: testdata/logic_test/show_trace:96: too many errors encountered, skipping the rest of the input --- FAIL: lint/TestStyle (30.130s) null --- FAIL: lint/TestStyle/TestErrCheck (8.920s) style_test.go:601: err=exit status 2, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal 
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal error: failed to check packages: could not type check: couldn't load packages due to errors: github.com/cockroachdb/cockroach/pkg/sql/distsqlrun --- FAIL: lint/TestStyle/TestReturnCheck (11.660s) style_test.go:625: err=exit status 1, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal /go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal --- FAIL: lint/TestStyle/TestMetacheck (9.550s) style_test.go:804: err=exit status 1, stderr=/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:235:38: unknown field Kind in struct literal /go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter_test.go:276:38: unknown field Kind in struct literal couldn't load packages due to errors: github.com/cockroachdb/cockroach/pkg/sql/distsqlrun ``` Please assign, take a look and update the issue accordingly.
test
teamcity failed tests on master test testlogic test testlogic default test testlogic default show trace testrace testlogic testrace testlogic default testrace testlogic default show trace lint teststyle lint teststyle testerrcheck lint teststyle testreturncheck lint teststyle testmetacheck the following tests appear to have failed fail test testlogic test log scope go test logs captured to go src github com cockroachdb cockroach artifacts test log scope go use show logs to present logs inline fail test testlogic default null fail test testlogic default show trace logic test go testdata logic test show trace expected starting plan querying next range at table kv starting plan sending batch get to starting plan querying next range at system desc idgen starting plan sending batch inc begintxn to starting plan cput table kv starting plan cput table table columns nullable false hidden false columns nullable true hidden false next column id families next family id primary index interleave next index id privileges next mutation id format version interleavedformatversion state public view query starting plan querying next range at table kv starting plan sending batch cput to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table system starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table eventlog starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan sending batch cput to but found query options starting plan querying next range at table kv starting plan sending batch get to starting plan querying next range at system desc idgen starting plan sending batch inc begintxn to starting plan cput table kv starting plan cput table table columns nullable false hidden 
false columns nullable true hidden false next column id families next family id primary index interleave next index id privileges next mutation id format version interleavedformatversion state public view query starting plan querying next range at table kv starting plan sending batch cput to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table system starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table eventlog starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan sending batch cput to logic test go testdata logic test show trace too many errors encountered skipping the rest of the input fail testrace testlogic test logs captured to go src github com cockroachdb cockroach artifacts use show logs to present logs inline fail testrace testlogic default null fail testrace testlogic default show trace logic test go testdata logic test show trace expected starting plan querying next range at table kv starting plan sending batch get to starting plan querying next range at system desc idgen starting plan sending batch inc begintxn to starting plan cput table kv starting plan cput table table columns nullable false hidden false columns nullable true hidden false next column id families next family id primary index interleave next index id privileges next mutation id format version interleavedformatversion state public view query starting plan querying next range at table kv starting plan sending batch cput to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table system starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting 
plan querying next range at table eventlog starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan sending batch cput to but found query options starting plan querying next range at table kv starting plan sending batch get to starting plan querying next range at system desc idgen starting plan sending batch inc begintxn to starting plan cput table kv starting plan cput table table columns nullable false hidden false columns nullable true hidden false next column id families next family id primary index interleave next index id privileges next mutation id format version interleavedformatversion state public view query starting plan querying next range at table kv starting plan sending batch cput to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table system starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table eventlog starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan querying next range at table starting plan sending batch get to starting plan sending batch cput to logic test go testdata logic test show trace too many errors encountered skipping the rest of the input fail lint teststyle null fail lint teststyle testerrcheck style test go err exit status stderr go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal error failed to check packages could not type check couldn t load packages due to errors github com cockroachdb cockroach pkg sql distsqlrun fail lint teststyle testreturncheck style test go err exit 
status stderr go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal fail lint teststyle testmetacheck style test go err exit status stderr go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal go src github com cockroachdb cockroach pkg sql distsqlrun sorter test go unknown field kind in struct literal couldn t load packages due to errors github com cockroachdb cockroach pkg sql distsqlrun please assign take a look and update the issue accordingly
1
250,805
27,111,566,517
IssuesEvent
2023-02-15 15:39:03
EliyaC/JAVA-DEMO
https://api.github.com/repos/EliyaC/JAVA-DEMO
closed
CVE-2013-4002 (Medium) detected in xercesImpl-2.8.0.jar - autoclosed
security vulnerability
## CVE-2013-4002 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary> <p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI), a complete framework for building parser components and configurations that is extremely modular and easy to program.</p> <p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p> <p> Dependency Hierarchy: - esapi-2.1.0.1.jar (Root Library) - xom-1.2.5.jar - :x: **xercesImpl-2.8.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/JAVA-DEMO/commit/161c15821aae65cc93de5739e7824a2c312f0b60">161c15821aae65cc93de5739e7824a2c312f0b60</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names. 
<p>Publish Date: 2013-07-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p> <p>Release Date: 2013-07-23</p> <p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p> </p> </details> <p></p>
True
CVE-2013-4002 (Medium) detected in xercesImpl-2.8.0.jar - autoclosed - ## CVE-2013-4002 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary> <p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI), a complete framework for building parser components and configurations that is extremely modular and easy to program.</p> <p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p> <p> Dependency Hierarchy: - esapi-2.1.0.1.jar (Root Library) - xom-1.2.5.jar - :x: **xercesImpl-2.8.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/JAVA-DEMO/commit/161c15821aae65cc93de5739e7824a2c312f0b60">161c15821aae65cc93de5739e7824a2c312f0b60</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names. 
<p>Publish Date: 2013-07-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p> <p>Release Date: 2013-07-23</p> <p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p> </p> </details> <p></p>
non_test
cve medium detected in xercesimpl jar autoclosed cve medium severity vulnerability vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy esapi jar root library xom jar x xercesimpl jar vulnerable library found in head commit a href found in base branch main vulnerability details xmlscanner java in apache java parser before as used in the java runtime environment jre in ibm java before before before and before as well as oracle java se and earlier java se and earlier java se and earlier jrockit and earlier jrockit and earlier java se embedded and earlier and possibly other products allows remote attackers to cause a denial of service via vectors related to xml attribute names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xerces xercesimpl xerces j
0
83,046
10,319,426,159
IssuesEvent
2019-08-30 17:29:07
reactioncommerce/reaction
https://api.github.com/repos/reactioncommerce/reaction
closed
Bulk Tag/Product Feature: Show isVisible/Hidden circle in Product Table
design-complete reaction-admin
<img width="149" alt="Screen Shot 2019-07-30 at 7 06 13 PM" src="https://user-images.githubusercontent.com/3673236/62178229-217fbb80-b2fd-11e9-9ec7-e1f2699fce15.png">
1.0
Bulk Tag/Product Feature: Show isVisible/Hidden circle in Product Table - <img width="149" alt="Screen Shot 2019-07-30 at 7 06 13 PM" src="https://user-images.githubusercontent.com/3673236/62178229-217fbb80-b2fd-11e9-9ec7-e1f2699fce15.png">
non_test
bulk tag product feature show isvisible hidden circle in product table img width alt screen shot at pm src
0
58,540
7,160,936,647
IssuesEvent
2018-01-28 07:55:36
HabitRPG/habitica
https://api.github.com/repos/HabitRPG/habitica
closed
Challenge Winner Text is misleading
POST-REDESIGN good first issue priority: medium section: Challenges: all section: Challenges: judging status: issue: in progress
The Challenge winner declaration text implies that you can select more than one winner, which is not the case. It should say “Select a winner from the Challenge participants” and “Award Winner.” (Also, there should be some space between the winner’s name and “Selected”.)
1.0
Challenge Winner Text is misleading - The Challenge winner declaration text implies that you can select more than one winner, which is not the case. It should say “Select a winner from the Challenge participants” and “Award Winner.” (Also, there should be some space between the winner’s name and “Selected”.)
non_test
challenge winner text is misleading the challenge winner declaration text implies that you can select more than one winner which is not the case it should say “select a winner from the challenge participants” and “award winner ” also there should be some space between the winner’s name and “selected”
0
1,982
2,580,369,341
IssuesEvent
2015-02-13 17:12:50
aisapatino/sjfnw
https://api.github.com/repos/aisapatino/sjfnw
opened
Fundraising tests - manage account
area: project central beta i type: testing
`manage_account` view - already registered with that GP - invalid GP id (not found) - valid, not pre-approved - valid, pre-approved
1.0
Fundraising tests - manage account - `manage_account` view - already registered with that GP - invalid GP id (not found) - valid, not pre-approved - valid, pre-approved
test
fundraising tests manage account manage account view already registered with that gp invalid gp id not found valid not pre approved valid pre approved
1
780,542
27,399,422,448
IssuesEvent
2023-02-28 22:44:50
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
Presets name are not ellipsized: this could make panel size change on long ones
feature: enhancement priority: medium scope: UI
**Describe the bug/issue** 1. Have a not too wide panel (it should be smaller that what is needed to display preset name) 2. Go to Tone equalizer module 3. Select preset with long name (those starting by compress) 4. See that panel size has changed to allow all preset name to appears. 5. Change a setting on tone equalizer, preset name disappear and panel is restored too previous smaller size. This is probably related to PR #13149. So @TurboGit: should probably another one for you. **Expected behavior** Panel size should not changed and so preset name should be ellipsized. **Platform** _Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _ * darktable version : master
1.0
Presets name are not ellipsized: this could make panel size change on long ones - **Describe the bug/issue** 1. Have a not too wide panel (it should be smaller that what is needed to display preset name) 2. Go to Tone equalizer module 3. Select preset with long name (those starting by compress) 4. See that panel size has changed to allow all preset name to appears. 5. Change a setting on tone equalizer, preset name disappear and panel is restored too previous smaller size. This is probably related to PR #13149. So @TurboGit: should probably another one for you. **Expected behavior** Panel size should not changed and so preset name should be ellipsized. **Platform** _Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _ * darktable version : master
non_test
presets name are not ellipsized this could make panel size change on long ones describe the bug issue have a not too wide panel it should be smaller that what is needed to display preset name go to tone equalizer module select preset with long name those starting by compress see that panel size has changed to allow all preset name to appears change a setting on tone equalizer preset name disappear and panel is restored too previous smaller size this is probably related to pr so turbogit should probably another one for you expected behavior panel size should not changed and so preset name should be ellipsized platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version master
0
61,773
25,729,050,188
IssuesEvent
2022-12-07 18:46:31
microsoft/vscode-cpptools
https://api.github.com/repos/microsoft/vscode-cpptools
closed
Add support for `/cygdrive` paths returned by some versions of Cygwin
Language Service fixed (release pending) Feature: Configuration
Hi, I am facing error: "Cannot open source file stddef.h" (dependency of stdlib.h)" when editing c files. ![image](https://user-images.githubusercontent.com/12242398/200069265-eb2d4a9d-25bc-44b0-b553-f137d6f1592a.png) Archive with CygWin installation: https://1drv.ms/u/s!AhQbYFkanIg5h5RrrKO2FXw8tPj23Q?e=1igDaK Config: ```JSON { "configurations": [ { "name": "Win32", "includePath": ["${workspaceFolder}/**"], "defines": ["_DEBUG", "UNICODE", "_UNICODE"], "compilerPath": "C:\\Program Files\\CygWin64\\bin\\gcc.exe", "cStandard": "c99", "cppStandard": "c++17", "intelliSenseMode": "linux-gcc-x64" } ], "version": 4 } ``` Weirdly enough Intelli-Sense seems to work just fine. ![image](https://user-images.githubusercontent.com/12242398/200069169-98e45f06-747f-4adc-a372-47de547497c5.png) ![image](https://user-images.githubusercontent.com/12242398/200069109-f16edb36-f4e5-4809-90bf-0a46da2d880a.png)
1.0
Add support for `/cygdrive` paths returned by some versions of Cygwin - Hi, I am facing error: "Cannot open source file stddef.h" (dependency of stdlib.h)" when editing c files. ![image](https://user-images.githubusercontent.com/12242398/200069265-eb2d4a9d-25bc-44b0-b553-f137d6f1592a.png) Archive with CygWin installation: https://1drv.ms/u/s!AhQbYFkanIg5h5RrrKO2FXw8tPj23Q?e=1igDaK Config: ```JSON { "configurations": [ { "name": "Win32", "includePath": ["${workspaceFolder}/**"], "defines": ["_DEBUG", "UNICODE", "_UNICODE"], "compilerPath": "C:\\Program Files\\CygWin64\\bin\\gcc.exe", "cStandard": "c99", "cppStandard": "c++17", "intelliSenseMode": "linux-gcc-x64" } ], "version": 4 } ``` Weirdly enough Intelli-Sense seems to work just fine. ![image](https://user-images.githubusercontent.com/12242398/200069169-98e45f06-747f-4adc-a372-47de547497c5.png) ![image](https://user-images.githubusercontent.com/12242398/200069109-f16edb36-f4e5-4809-90bf-0a46da2d880a.png)
non_test
add support for cygdrive paths returned by some versions of cygwin hi i am facing error cannot open source file stddef h dependency of stdlib h when editing c files archive with cygwin installation config json configurations name includepath defines compilerpath c program files bin gcc exe cstandard cppstandard c intellisensemode linux gcc version weirdly enough intelli sense seems to work just fine
0
300,443
22,678,562,786
IssuesEvent
2022-07-04 07:49:11
RedisGraph/RedisGraph
https://api.github.com/repos/RedisGraph/RedisGraph
closed
GRAPH.CONFIG docs link wrong SET command
Documentation
Command documentation pages of both [GRAPH.CONFIG SET](https://redis.io/commands/graph.config-set/) and [GRAPH.CONFIG GET](https://redis.io/commands/graph.config-get/) commands wrongly link the [regular SET](https://redis.io/commands/set) command.
1.0
GRAPH.CONFIG docs link wrong SET command - Command documentation pages of both [GRAPH.CONFIG SET](https://redis.io/commands/graph.config-set/) and [GRAPH.CONFIG GET](https://redis.io/commands/graph.config-get/) commands wrongfully links [regular SET](https://redis.io/commands/set) command.
non_test
graph config docs link wrong set command command documentation pages of both and commands wrongfully links command
0
333,793
29,808,818,404
IssuesEvent
2023-06-16 13:39:40
Azure/azure-sdk-for-net
https://api.github.com/repos/Azure/azure-sdk-for-net
closed
[ContainerRegistry] Deploy test resources failing in nightly runs
Container Registry Client needs-team-triage test-reliability
ContainerRegistry nightly test runs are failing with: > 18:36:23 - Invoking post-deployment script '/mnt/vss/_work/1/s/sdk/containerregistry/test-resources-post.ps1' > VERBOSE: Running registered exit actions > VERBOSE: Logging out of service principal '***' > VERBOSE: Performing the operation "New-TestResources.ps1" on target "***". > Import-AzContainerRegistryImage: /mnt/vss/_work/1/s/sdk/containerregistry/test-resources-post.ps1:27 > Line | > 27 | Import-AzContainerRegistryImage ` > | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > | An error occurred while sending the request. > > ##[error]PowerShell exited with code '1'. > For more details check here: - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=2850319&view=results @jsquire for notification.
1.0
[ContainerRegistry] Deploy test resources failing in nightly runs - ContainerRegistry nightly test runs are failing with: > 18:36:23 - Invoking post-deployment script '/mnt/vss/_work/1/s/sdk/containerregistry/test-resources-post.ps1' > VERBOSE: Running registered exit actions > VERBOSE: Logging out of service principal '***' > VERBOSE: Performing the operation "New-TestResources.ps1" on target "***". > Import-AzContainerRegistryImage: /mnt/vss/_work/1/s/sdk/containerregistry/test-resources-post.ps1:27 > Line | > 27 | Import-AzContainerRegistryImage ` > | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > | An error occurred while sending the request. > > ##[error]PowerShell exited with code '1'. > For more details check here: - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=2850319&view=results @jsquire for notification.
test
deploy test resources failing in nightly runs containerregistry nightly test runs are failing with invoking post deployment script mnt vss work s sdk containerregistry test resources post verbose running registered exit actions verbose logging out of service principal verbose performing the operation new testresources on target import azcontainerregistryimage mnt vss work s sdk containerregistry test resources post line import azcontainerregistryimage an error occurred while sending the request powershell exited with code for more details check here jsquire for notification
1
219,896
24,539,529,662
IssuesEvent
2022-10-12 01:30:50
rsoreq/keycloak-quickstarts
https://api.github.com/repos/rsoreq/keycloak-quickstarts
opened
CVE-2022-42003 (High) detected in jackson-databind-2.10.3.jar
security vulnerability
## CVE-2022-42003 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.10.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /app-profile-jee-jsp/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/j
ackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.10.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rsoreq/keycloak-quickstarts/commit/b6816370d88885f5bec5979e71a32c917478ec4a">b6816370d88885f5bec5979e71a32c917478ec4a</a></p> <p>Found in base branch: <b>latest</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. <p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p>
True
CVE-2022-42003 (High) detected in jackson-databind-2.10.3.jar - ## CVE-2022-42003 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.10.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /app-profile-jee-jsp/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/ja
ckson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.10.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rsoreq/keycloak-quickstarts/commit/b6816370d88885f5bec5979e71a32c917478ec4a">b6816370d88885f5bec5979e71a32c917478ec4a</a></p> <p>Found in base branch: <b>latest</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. 
<p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file app profile jee jsp pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar 
vulnerable library found in head commit a href found in base branch latest vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
0
148,889
11,870,495,071
IssuesEvent
2020-03-26 12:54:51
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
closed
Configurator: Can't set the mysql root password
Priority: High Status: Tested and works Type: Bug
```` [root@pf-testing ~]# rpm -qa packetfence packetfence-9.3.9-20200325180446.129715902.0007.devel.el7.x86_64 ```` If I click on "Set password" I get : ![image](https://user-images.githubusercontent.com/5261214/77575893-90bca800-6eaa-11ea-918c-dacdb01539a3.png)
1.0
Configurator: Can't set the mysql root password - ```` [root@pf-testing ~]# rpm -qa packetfence packetfence-9.3.9-20200325180446.129715902.0007.devel.el7.x86_64 ```` If I click on "Set password" I get : ![image](https://user-images.githubusercontent.com/5261214/77575893-90bca800-6eaa-11ea-918c-dacdb01539a3.png)
test
configurator can t set the mysql root password rpm qa packetfence packetfence devel if i click on set password i get
1
18,884
3,729,037,745
IssuesEvent
2016-03-07 05:08:05
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
CentOS7: Event Errors in Rancher Agent when starting containers
area/agent kind/bug release/v1.0.0 status/to-test
Rancher Agent v0.7.6 Rancher Version: v0.20.0 Docker Version: 1.6.1 Event errors are thrown when launching containers on a CentOS7 host. The containers will eventually launch, but unable to ping other containers. ```bash time="2015-05-12T17:53:43Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"edd8b73f 098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7\", From:\"rancher/agent-instance:v0.3.1\", Time:1431453223}" time="2015-05-12T17:53:43Z" level="info" msg="Assigning IP [10.42.92.230/16], ContainerId [edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7], Pid [6462]" time="2015-05-12T17:53:43Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7\", From:\"-simulated-\", Time:0}" time="2015-05-12T17:53:43Z" level="info" msg="Container locked. Can't run StartHandler. ID: [edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7]" W0512 17:53:53.686973 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument W0512 17:53:53.687194 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument time="2015-05-12T17:54:11Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"create\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"ubuntu:14.04.1\", Time:1431453251}" time="2015-05-12T17:54:11Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"ubuntu:14.04.1\", Time:1431453251}" time="2015-05-12T17:54:12Z" level="info" msg="Assigning IP [10.42.184.132/16], ContainerId 
[8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954], Pid [8083]" time="2015-05-12T17:54:12Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"-simulated-\", Time:0}" time="2015-05-12T17:54:12Z" level="info" msg="Container locked. Can't run StartHandler. ID: [8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954]" W0512 17:54:22.675596 04356 handler.go:524] Error while processing event ("/cpuacct.usage": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.usage" W0512 17:54:22.675680 04356 handler.go:524] Error while processing event ("/cpuacct.usage_percpu": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.usage_percpu" W0512 17:54:22.675700 04356 handler.go:524] Error while processing event ("/cpuacct.stat": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.stat" W0512 17:54:22.675712 04356 handler.go:524] Error while processing event ("/cpu.shares": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.shares" W0512 17:54:22.675726 04356 handler.go:524] Error while processing event ("/cpu.cfs_quota_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.cfs_quota_us" W0512 17:54:22.675736 04356 handler.go:524] Error while processing event ("/cpu.cfs_period_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.cfs_period_us" W0512 17:54:22.675750 04356 handler.go:524] Error while processing event ("/cpu.stat": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.stat" W0512 17:54:22.675780 04356 handler.go:524] Error while processing event ("/cpu.rt_runtime_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.rt_runtime_us" W0512 17:54:22.675795 04356 
handler.go:524] Error while processing event ("/cpu.rt_period_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.rt_period_us" W0512 17:54:22.675806 04356 handler.go:524] Error while processing event ("/tasks": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/tasks" W0512 17:54:22.675819 04356 handler.go:524] Error while processing event ("/cgroup.procs": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.procs" W0512 17:54:22.675829 04356 handler.go:524] Error while processing event ("/notify_on_release": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/notify_on_release" W0512 17:54:22.675842 04356 handler.go:524] Error while processing event ("/cgroup.event_control": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.event_control" W0512 17:54:22.675853 04356 handler.go:524] Error while processing event ("/cgroup.clone_children": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.clone_children" W0512 17:54:22.675883 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument W0512 17:54:22.675899 04356 handler.go:524] Error while processing event ("/blkio.reset_stats": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.reset_stats" W0512 17:54:22.675912 04356 handler.go:524] Error while processing event ("/blkio.throttle.read_bps_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.read_bps_device" W0512 17:54:22.675938 04356 handler.go:524] Error while processing event ("/blkio.throttle.write_bps_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.write_bps_device" W0512 
17:54:22.675953 04356 handler.go:524] Error while processing event ("/blkio.throttle.read_iops_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.read_iops_device" W0512 17:54:22.676201 04356 handler.go:524] Error while processing event ("/blkio.throttle.write_iops_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.write_iops_device" W0512 17:54:22.676220 04356 handler.go:524] Error while processing event ("/blkio.throttle.io_service_bytes": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.io_service_bytes" W0512 17:54:22.676234 04356 handler.go:524] Error while processing event ("/blkio.throttle.io_serviced": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.io_serviced" W0512 17:54:22.676248 04356 handler.go:524] Error while processing event ("/blkio.weight_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.weight_device" W0512 17:54:22.676260 04356 handler.go:524] Error while processing event ("/blkio.weight": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.weight" W0512 17:54:22.676274 04356 handler.go:524] Error while processing event ("/blkio.leaf_weight_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.leaf_weight_device" W0512 17:54:22.676285 04356 handler.go:524] Error while processing event ("/blkio.leaf_weight": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.leaf_weight" W0512 17:54:22.676298 04356 handler.go:524] Error while processing event ("/blkio.time": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.time" W0512 17:54:22.676308 04356 handler.go:524] Error while processing event ("/blkio.sectors": 0x200 == IN_DELETE): unable to detect container from watch 
event on directory "/blkio.sectors" W0512 17:54:22.676322 04356 handler.go:524] Error while processing event ("/blkio.io_service_bytes": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_bytes" W0512 17:54:22.676333 04356 handler.go:524] Error while processing event ("/blkio.io_serviced": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_serviced" W0512 17:54:22.676346 04356 handler.go:524] Error while processing event ("/blkio.io_service_time": 0x200 == IN_D ELETE): unable to detect container from watch event on directory "/blkio.io_service_time" W0512 17:54:22.676357 04356 handler.go:524] Error while processing event ("/blkio.io_wait_time": 0x200 == IN_DELE TE): unable to detect container from watch event on directory "/blkio.io_wait_time" W0512 17:54:22.676369 04356 handler.go:524] Error while processing event ("/blkio.io_merged": 0x200 == IN_DELETE) : unable to detect container from watch event on directory "/blkio.io_merged" W0512 17:54:22.676380 04356 handler.go:524] Error while processing event ("/blkio.io_queued": 0x200 == IN_DELETE) : unable to detect container from watch event on directory "/blkio.io_queued" W0512 17:54:22.676392 04356 handler.go:524] Error while processing event ("/blkio.time_recursive": 0x200 == IN_DE LETE): unable to detect container from watch event on directory "/blkio.time_recursive" W0512 17:54:22.676403 04356 handler.go:524] Error while processing event ("/blkio.sectors_recursive": 0x200 == IN _DELETE): unable to detect container from watch event on directory "/blkio.sectors_recursive" W0512 17:54:22.676417 04356 handler.go:524] Error while processing event ("/blkio.io_service_bytes_recursive": 0x 200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_bytes_recursive" W0512 17:54:22.676428 04356 handler.go:524] Error while processing event ("/blkio.io_serviced_recursive": 0x200 = = IN_DELETE): unable to 
detect container from watch event on directory "/blkio.io_serviced_recursive" W0512 17:54:22.676449 04356 handler.go:524] Error while processing event ("/blkio.io_service_time_recursive": 0x2 00 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_time_recursive" W0512 17:54:22.676460 04356 handler.go:524] Error while processing event ("/blkio.io_wait_time_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_wait_time_recursive" W0512 17:54:22.676473 04356 handler.go:524] Error while processing event ("/blkio.io_merged_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_merged_recursive" W0512 17:54:22.676485 04356 handler.go:524] Error while processing event ("/blkio.io_queued_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_queued_recursive" W0512 17:54:22.676498 04356 handler.go:524] Error while processing event ("/tasks": 0x200 == IN_DELETE): unable t o detect container from watch event on directory "/tasks" W0512 17:54:22.676509 04356 handler.go:524] Error while processing event ("/cgroup.procs": 0x200 == IN_DELETE): u nable to detect container from watch event on directory "/cgroup.procs" W0512 17:54:22.676522 04356 handler.go:524] Error while processing event ("/notify_on_release": 0x200 == IN_DELET E): unable to detect container from watch event on directory "/notify_on_release" W0512 17:54:22.676532 04356 handler.go:524] Error while processing event ("/cgroup.event_control": 0x200 == IN_DE LETE): unable to detect container from watch event on directory "/cgroup.event_control" W0512 17:54:22.676546 04356 handler.go:524] Error while processing event ("/cgroup.clone_children": 0x200 == IN_D ELETE): unable to detect container from watch event on directory "/cgroup.clone_children" W0512 17:54:22.676558 04356 handler.go:524] Error while processing event 
("/sys/fs/cgroup/blkio/system.slice/Netw orkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument ```
1.0
CentOS7: Event Errors in Rancher Agent when starting containers - Rancher Agent v0.7.6 Rancher Version: v0.20.0 Docker Version: 1.6.1 Event errors are thrown when launching containers on a CentOS7 host. The containers will eventually launch, but unable to ping other containers. ```bash time="2015-05-12T17:53:43Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"edd8b73f 098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7\", From:\"rancher/agent-instance:v0.3.1\", Time:1431453223}" time="2015-05-12T17:53:43Z" level="info" msg="Assigning IP [10.42.92.230/16], ContainerId [edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7], Pid [6462]" time="2015-05-12T17:53:43Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7\", From:\"-simulated-\", Time:0}" time="2015-05-12T17:53:43Z" level="info" msg="Container locked. Can't run StartHandler. ID: [edd8b73f098ff2f3dba0bac2f9985b704fc95383ce15e159368c567d5b78c6d7]" W0512 17:53:53.686973 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument W0512 17:53:53.687194 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument time="2015-05-12T17:54:11Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"create\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"ubuntu:14.04.1\", Time:1431453251}" time="2015-05-12T17:54:11Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"ubuntu:14.04.1\", Time:1431453251}" time="2015-05-12T17:54:12Z" level="info" msg="Assigning IP 
[10.42.184.132/16], ContainerId [8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954], Pid [8083]" time="2015-05-12T17:54:12Z" level="info" msg="Processing event: &docker.APIEvents{Status:\"start\", ID:\"8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954\", From:\"-simulated-\", Time:0}" time="2015-05-12T17:54:12Z" level="info" msg="Container locked. Can't run StartHandler. ID: [8383da581e4c4d039cedc3b218a9e687d7f4f695df02cbbd999a601e69750954]" W0512 17:54:22.675596 04356 handler.go:524] Error while processing event ("/cpuacct.usage": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.usage" W0512 17:54:22.675680 04356 handler.go:524] Error while processing event ("/cpuacct.usage_percpu": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.usage_percpu" W0512 17:54:22.675700 04356 handler.go:524] Error while processing event ("/cpuacct.stat": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpuacct.stat" W0512 17:54:22.675712 04356 handler.go:524] Error while processing event ("/cpu.shares": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.shares" W0512 17:54:22.675726 04356 handler.go:524] Error while processing event ("/cpu.cfs_quota_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.cfs_quota_us" W0512 17:54:22.675736 04356 handler.go:524] Error while processing event ("/cpu.cfs_period_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.cfs_period_us" W0512 17:54:22.675750 04356 handler.go:524] Error while processing event ("/cpu.stat": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.stat" W0512 17:54:22.675780 04356 handler.go:524] Error while processing event ("/cpu.rt_runtime_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.rt_runtime_us" W0512 
17:54:22.675795 04356 handler.go:524] Error while processing event ("/cpu.rt_period_us": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cpu.rt_period_us" W0512 17:54:22.675806 04356 handler.go:524] Error while processing event ("/tasks": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/tasks" W0512 17:54:22.675819 04356 handler.go:524] Error while processing event ("/cgroup.procs": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.procs" W0512 17:54:22.675829 04356 handler.go:524] Error while processing event ("/notify_on_release": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/notify_on_release" W0512 17:54:22.675842 04356 handler.go:524] Error while processing event ("/cgroup.event_control": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.event_control" W0512 17:54:22.675853 04356 handler.go:524] Error while processing event ("/cgroup.clone_children": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.clone_children" W0512 17:54:22.675883 04356 handler.go:524] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument W0512 17:54:22.675899 04356 handler.go:524] Error while processing event ("/blkio.reset_stats": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.reset_stats" W0512 17:54:22.675912 04356 handler.go:524] Error while processing event ("/blkio.throttle.read_bps_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.read_bps_device" W0512 17:54:22.675938 04356 handler.go:524] Error while processing event ("/blkio.throttle.write_bps_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory 
"/blkio.throttle.write_bps_device" W0512 17:54:22.675953 04356 handler.go:524] Error while processing event ("/blkio.throttle.read_iops_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.read_iops_device" W0512 17:54:22.676201 04356 handler.go:524] Error while processing event ("/blkio.throttle.write_iops_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.write_iops_device" W0512 17:54:22.676220 04356 handler.go:524] Error while processing event ("/blkio.throttle.io_service_bytes": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.io_service_bytes" W0512 17:54:22.676234 04356 handler.go:524] Error while processing event ("/blkio.throttle.io_serviced": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.throttle.io_serviced" W0512 17:54:22.676248 04356 handler.go:524] Error while processing event ("/blkio.weight_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.weight_device" W0512 17:54:22.676260 04356 handler.go:524] Error while processing event ("/blkio.weight": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.weight" W0512 17:54:22.676274 04356 handler.go:524] Error while processing event ("/blkio.leaf_weight_device": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.leaf_weight_device" W0512 17:54:22.676285 04356 handler.go:524] Error while processing event ("/blkio.leaf_weight": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.leaf_weight" W0512 17:54:22.676298 04356 handler.go:524] Error while processing event ("/blkio.time": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.time" W0512 17:54:22.676308 04356 handler.go:524] Error while processing event ("/blkio.sectors": 0x200 == 
IN_DELETE): unable to detect container from watch event on directory "/blkio.sectors" W0512 17:54:22.676322 04356 handler.go:524] Error while processing event ("/blkio.io_service_bytes": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_bytes" W0512 17:54:22.676333 04356 handler.go:524] Error while processing event ("/blkio.io_serviced": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_serviced" W0512 17:54:22.676346 04356 handler.go:524] Error while processing event ("/blkio.io_service_time": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_time" W0512 17:54:22.676357 04356 handler.go:524] Error while processing event ("/blkio.io_wait_time": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_wait_time" W0512 17:54:22.676369 04356 handler.go:524] Error while processing event ("/blkio.io_merged": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_merged" W0512 17:54:22.676380 04356 handler.go:524] Error while processing event ("/blkio.io_queued": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_queued" W0512 17:54:22.676392 04356 handler.go:524] Error while processing event ("/blkio.time_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.time_recursive" W0512 17:54:22.676403 04356 handler.go:524] Error while processing event ("/blkio.sectors_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.sectors_recursive" W0512 17:54:22.676417 04356 handler.go:524] Error while processing event ("/blkio.io_service_bytes_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_bytes_recursive" W0512 17:54:22.676428 04356 handler.go:524] Error while processing event 
("/blkio.io_serviced_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_serviced_recursive" W0512 17:54:22.676449 04356 handler.go:524] Error while processing event ("/blkio.io_service_time_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_service_time_recursive" W0512 17:54:22.676460 04356 handler.go:524] Error while processing event ("/blkio.io_wait_time_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_wait_time_recursive" W0512 17:54:22.676473 04356 handler.go:524] Error while processing event ("/blkio.io_merged_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_merged_recursive" W0512 17:54:22.676485 04356 handler.go:524] Error while processing event ("/blkio.io_queued_recursive": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/blkio.io_queued_recursive" W0512 17:54:22.676498 04356 handler.go:524] Error while processing event ("/tasks": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/tasks" W0512 17:54:22.676509 04356 handler.go:524] Error while processing event ("/cgroup.procs": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.procs" W0512 17:54:22.676522 04356 handler.go:524] Error while processing event ("/notify_on_release": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/notify_on_release" W0512 17:54:22.676532 04356 handler.go:524] Error while processing event ("/cgroup.event_control": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.event_control" W0512 17:54:22.676546 04356 handler.go:524] Error while processing event ("/cgroup.clone_children": 0x200 == IN_DELETE): unable to detect container from watch event on directory "/cgroup.clone_children" W0512 17:54:22.676558 04356 
handler.go:524] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/NetworkManager-dispatcher.service": 0x40000200 == IN_DELETE|IN_ISDIR): inotify_rm_watch: invalid argument ```
test
event errors in rancher agent when starting containers rancher agent rancher version docker version event errors are thrown when launching containers on a host the containers will eventually launch but unable to ping other containers bash time level info msg processing event docker apievents status start id from rancher agent instance time time level info msg assigning ip containerid pid time level info msg processing event docker apievents status start id from simulated time time level info msg container locked can t run starthandler id handler go error while processing event sys fs cgroup cpu cpuacct system slice networkmanager dispatcher service in delete in isdir inotify rm watch invalid argument handler go error while processing event sys fs cgroup blkio system slice networkmanager dispatcher service in delete in isdir inotify rm watch invalid argument time level info msg processing event docker apievents status create id from ubuntu time time level info msg processing event docker apievents status start id from ubuntu time time level info msg assigning ip containerid pid time level info msg processing event docker apievents status start id from simulated time time level info msg container locked can t run starthandler id handler go error while processing event cpuacct usage in delete unable to detect container from watch event on directory cpuacct usage handler go error while processing event cpuacct usage percpu in delete unable to detect container from watch event on directory cpuacct usage percpu handler go error while processing event cpuacct stat in delete unable to detect container from watch event on directory cpuacct stat handler go error while processing event cpu shares in delete unable to detect container from watch event on directory cpu shares handler go error while processing event cpu cfs quota us in delete unable to detect container from watch event on directory cpu cfs quota us handler go error while processing event cpu cfs period us in 
delete unable to detect container from watch event on directory cpu cfs period us handler go error while processing event cpu stat in delete unable to detect container from watch event on directory cpu stat handler go error while processing event cpu rt runtime us in delete unable to detect container from watch event on directory cpu rt runtime us handler go error while processing event cpu rt period us in delete unable to detect container from watch event on directory cpu rt period us handler go error while processing event tasks in delete unable to detect container from watch event on directory tasks handler go error while processing event cgroup procs in delete unable to detect container from watch event on directory cgroup procs handler go error while processing event notify on release in delete unable to detect container from watch event on directory notify on release handler go error while processing event cgroup event control in delete unable to detect container from watch event on directory cgroup event control handler go error while processing event cgroup clone children in delete unable to detect container from watch event on directory cgroup clone children handler go error while processing event sys fs cgroup cpu cpuacct system slice networkmanager dispatcher service in delete in isdir inotify rm watch invalid argument handler go error while processing event blkio reset stats in delete unable to detect container from watch event on directory blkio reset stats handler go error while processing event blkio throttle read bps device in delete unable to detect container from watch event on directory blkio throttle read bps device handler go error while processing event blkio throttle write bps device in delete unable to detect container from watch event on directory blkio throttle write bps device handler go error while processing event blkio throttle read iops device in delete unable to detect container from watch event on directory blkio throttle read iops 
device handler go error while processing event blkio throttle write iops device in delete unable to detect container from watch event on directory blkio throttle write iops device handler go error while processing event blkio throttle io service bytes in delete unable to detect container from watch event on directory blkio throttle io service bytes handler go error while processing event blkio throttle io serviced in delete unable to detect container from watch event on directory blkio throttle io serviced handler go error while processing event blkio weight device in delete unable to detect container from watch event on directory blkio weight device handler go error while processing event blkio weight in delete unable to detect container from watch event on directory blkio weight handler go error while processing event blkio leaf weight device in delete unable to detect container from watch event on directory blkio leaf weight device handler go error while processing event blkio leaf weight in delete unable to detect container from watch event on directory blkio leaf weight handler go error while processing event blkio time in delete unable to detect container from watch event on directory blkio time handler go error while processing event blkio sectors in delete unable to detect container from watch event on directory blkio sectors handler go error while processing event blkio io service bytes in delete unable to detect container from watch event on directory blkio io service bytes handler go error while processing event blkio io serviced in delete unable to detect container from watch event on directory blkio io serviced handler go error while processing event blkio io service time in delete unable to detect container from watch event on directory blkio io service time handler go error while processing event blkio io wait time in delete unable to detect container from watch event on directory blkio io wait time handler go error while processing event blkio io 
merged in delete unable to detect container from watch event on directory blkio io merged handler go error while processing event blkio io queued in delete unable to detect container from watch event on directory blkio io queued handler go error while processing event blkio time recursive in delete unable to detect container from watch event on directory blkio time recursive handler go error while processing event blkio sectors recursive in delete unable to detect container from watch event on directory blkio sectors recursive handler go error while processing event blkio io service bytes recursive in delete unable to detect container from watch event on directory blkio io service bytes recursive handler go error while processing event blkio io serviced recursive in delete unable to detect container from watch event on directory blkio io serviced recursive handler go error while processing event blkio io service time recursive in delete unable to detect container from watch event on directory blkio io service time recursive handler go error while processing event blkio io wait time recursive in delete unable to detect container from watch event on directory blkio io wait time recursive handler go error while processing event blkio io merged recursive in delete unable to detect container from watch event on directory blkio io merged recursive handler go error while processing event blkio io queued recursive in delete unable to detect container from watch event on directory blkio io queued recursive handler go error while processing event tasks in delete unable to detect container from watch event on directory tasks handler go error while processing event cgroup procs in delete unable to detect container from watch event on directory cgroup procs handler go error while processing event notify on release in delete unable to detect container from watch event on directory notify on release handler go error while processing event cgroup event control in delete 
unable to detect container from watch event on directory cgroup event control handler go error while processing event cgroup clone children in delete unable to detect container from watch event on directory cgroup clone children handler go error while processing event sys fs cgroup blkio system slice networkmanager dispatcher service in delete in isdir inotify rm watch invalid argument
1
199,097
15,023,819,528
IssuesEvent
2021-02-01 18:45:43
pravega/pravega
https://api.github.com/repos/pravega/pravega
closed
Speed up client unit tests
area/client area/testing
**Problem description** ControllerImplTests (+Secure) ConnectionFactoryImplTests (+Secure) **Problem location** **Suggestions for an improvement**
1.0
Speed up client unit tests - **Problem description** ControllerImplTests (+Secure) ConnectionFactoryImplTests (+Secure) **Problem location** **Suggestions for an improvement**
test
speed up client unit tests problem description controllerimpltests secure connectionfactoryimpltests secure problem location suggestions for an improvement
1
409,686
11,966,784,287
IssuesEvent
2020-04-06 04:46:10
wso2/product-microgateway
https://api.github.com/repos/wso2/product-microgateway
opened
Basic auth header is removed when x-wso2-disable-security: true
Priority/Normal Type/Bug
### Description: Basic authentication header is removed when x-wso2-disable-security: true and is no longer present when request is received by user defined interceptor. we are trying to validate basic auth hearder as part of interceptor. this was working in version 3.0.1 but there were other issue so moved to 3.1.0. but in latest version header is altogether removed. ### Steps to reproduce: /public/rt/PING: get: description: "" operationId: PING x-wso2-disable-security: true x-wso2-request-interceptor: java:org.mgw.interceptor.IDSAuthInterceptor responses: "200": description: Successful response content: application/json: schema: $ref: "#/components/schemas/PING" application/xml: schema: $ref: "#/components/schemas/PING" security: - basicAuthentication: [] ### Affected Product Version: 3.1.0
1.0
Basic auth header is removed when x-wso2-disable-security: true - ### Description: Basic authentication header is removed when x-wso2-disable-security: true and is no longer present when request is received by user defined interceptor. we are trying to validate basic auth hearder as part of interceptor. this was working in version 3.0.1 but there were other issue so moved to 3.1.0. but in latest version header is altogether removed. ### Steps to reproduce: /public/rt/PING: get: description: "" operationId: PING x-wso2-disable-security: true x-wso2-request-interceptor: java:org.mgw.interceptor.IDSAuthInterceptor responses: "200": description: Successful response content: application/json: schema: $ref: "#/components/schemas/PING" application/xml: schema: $ref: "#/components/schemas/PING" security: - basicAuthentication: [] ### Affected Product Version: 3.1.0
non_test
basic auth header is removed when x disable security true description basic authentication header is removed when x disable security true and is no longer present when request is received by user defined interceptor we are trying to validate basic auth hearder as part of interceptor this was working in version but there were other issue so moved to but in latest version header is altogether removed steps to reproduce public rt ping get description operationid ping x disable security true x request interceptor java org mgw interceptor idsauthinterceptor responses description successful response content application json schema ref components schemas ping application xml schema ref components schemas ping security basicauthentication affected product version
0
26,685
12,467,262,324
IssuesEvent
2020-05-28 16:46:46
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
opened
Story map and Input app templates in AGOL
Service: Geo Type: Map Request Workgroup: ATSD Workgroup: PIO
Set up a collaboration group in AGOL so that PIO, ATSD and DTS GIS staff can collaborate on story maps and input apps related to upcoming projects and virtual open houses. The group is currently in testing, and we'll move on to developing templates soon. Once we're happy with the functionality, I"ll create process documentation and share it with the group (documentation will be tracked in a separate issue.
1.0
Story map and Input app templates in AGOL - Set up a collaboration group in AGOL so that PIO, ATSD and DTS GIS staff can collaborate on story maps and input apps related to upcoming projects and virtual open houses. The group is currently in testing, and we'll move on to developing templates soon. Once we're happy with the functionality, I"ll create process documentation and share it with the group (documentation will be tracked in a separate issue.
non_test
story map and input app templates in agol set up a collaboration group in agol so that pio atsd and dts gis staff can collaborate on story maps and input apps related to upcoming projects and virtual open houses the group is currently in testing and we ll move on to developing templates soon once we re happy with the functionality i ll create process documentation and share it with the group documentation will be tracked in a separate issue
0
178,592
13,785,942,626
IssuesEvent
2020-10-09 00:18:13
Azure/autorest.typescript
https://api.github.com/repos/Azure/autorest.typescript
opened
Add test coverage for Non-string-enums test server scenarios
test-coverage v6
- [ ] [`NonStringEnumsGetFloat`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsGetFloat) - [ ] [`NonStringEnumsGetInt`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsGetInt) - [ ] [`NonStringEnumsPostFloat`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsPostFloat) - [ ] [`NonStringEnumsPostInt`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsPostInt) Bug https://github.com/Azure/autorest.typescript/issues/742 should be addressed when adding this coverage
1.0
Add test coverage for Non-string-enums test server scenarios - - [ ] [`NonStringEnumsGetFloat`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsGetFloat) - [ ] [`NonStringEnumsGetInt`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsGetInt) - [ ] [`NonStringEnumsPostFloat`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsPostFloat) - [ ] [`NonStringEnumsPostInt`](https://github.com/Azure/autorest.testserver/search?q=NonStringEnumsPostInt) Bug https://github.com/Azure/autorest.typescript/issues/742 should be addressed when adding this coverage
test
add test coverage for non string enums test server scenarios bug should be addressed when adding this coverage
1
319,879
27,404,786,719
IssuesEvent
2023-03-01 05:29:19
freqtrade/freqtrade
https://api.github.com/repos/freqtrade/freqtrade
closed
Cache 'on' cause backtesting error
Question Backtest
<!-- Have you searched for similar issues before posting it? Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there Please do not use the question template to report bugs or to request new features. --> ## Describe your environment * Operating system: wsl2 w10 * Python Version: 3.10 * CCXT version: 2.7.66 * Freqtrade Version: 2023.dev2 ## Your question first run on new time frame, bot takes 252 trade after edit strategy (only **dataframe['rsi_3m'] < 75** to **dataframe['rsi_3m'] < 70**, no other thing change), If i run ``` freqtrade backtesting --userdir=tData -c=tData/config.json -s=tScalp --cache=none --timerange=20220209-20230210 ``` then when edit stra, it works (with total **250** trade on) but if i run with no **--cache = none**: ``` freqtrade backtesting --userdir=tData -c=tData/config.json -s=tScalp --timerange=20220209-20230210 ``` it still show **252** trade *Ask the question you have not been able to find an answer in the [Documentation](https://www.freqtrade.io/en/latest/)* What wrong with my command to use cache function correctly? many thank
1.0
Cache 'on' cause backtesting error - <!-- Have you searched for similar issues before posting it? Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there Please do not use the question template to report bugs or to request new features. --> ## Describe your environment * Operating system: wsl2 w10 * Python Version: 3.10 * CCXT version: 2.7.66 * Freqtrade Version: 2023.dev2 ## Your question first run on new time frame, bot takes 252 trade after edit strategy (only **dataframe['rsi_3m'] < 75** to **dataframe['rsi_3m'] < 70**, no other thing change), If i run ``` freqtrade backtesting --userdir=tData -c=tData/config.json -s=tScalp --cache=none --timerange=20220209-20230210 ``` then when edit stra, it works (with total **250** trade on) but if i run with no **--cache = none**: ``` freqtrade backtesting --userdir=tData -c=tData/config.json -s=tScalp --timerange=20220209-20230210 ``` it still show **252** trade *Ask the question you have not been able to find an answer in the [Documentation](https://www.freqtrade.io/en/latest/)* What wrong with my command to use cache function correctly? many thank
test
cache on cause backtesting error have you searched for similar issues before posting it did you have a very good look at the and are sure that the question is not explained there please do not use the question template to report bugs or to request new features describe your environment operating system python version ccxt version freqtrade version your question first run on new time frame bot takes trade after edit strategy only dataframe to dataframe no other thing change if i run freqtrade backtesting userdir tdata c tdata config json s tscalp cache none timerange then when edit stra it works with total trade on but if i run with no cache none freqtrade backtesting userdir tdata c tdata config json s tscalp timerange it still show trade ask the question you have not been able to find an answer in the what wrong with my command to use cache function correctly many thank
1
33,022
6,997,174,312
IssuesEvent
2017-12-16 11:19:28
RoboCup-SSL/ssl-vision
https://api.github.com/repos/RoboCup-SSL/ssl-vision
closed
Glitch when playing teams have different heights
auto-migrated Priority-Medium Type-Defect
``` When the two teams have robots with different heights, a workaround is needed in order to configure both, given the height comes from the pattern and not from the team. The workaround is to add a second 'standard2010' team, which is sort of unusual. Using version r198 ``` Original issue reported on code.google.com by `jgurz...@yahoo.com.br` on 12 Nov 2011 at 1:43
1.0
Glitch when playing teams have different heights - ``` When the two teams have robots with different heights, a workaround is needed in order to configure both, given the height comes from the pattern and not from the team. The workaround is to add a second 'standard2010' team, which is sort of unusual. Using version r198 ``` Original issue reported on code.google.com by `jgurz...@yahoo.com.br` on 12 Nov 2011 at 1:43
non_test
glitch when playing teams have different heights when the two teams have robots with different heights a workaround is needed in order to configure both given the height comes from the pattern and not from the team the workaround is to add a second team which is sort of unusual using version original issue reported on code google com by jgurz yahoo com br on nov at
0
315,777
27,106,171,182
IssuesEvent
2023-02-15 12:16:51
LIBCAS/DL4DH-Kramerius-plus
https://api.github.com/repos/LIBCAS/DL4DH-Kramerius-plus
closed
Error 500 when displaying enrichment task detail
T::ToTests
### Steps - I submit enrichment of publications from the collection of spiritualist texts (via the REST API) - in the list of enrichment requests (`/enrichment/`) I select the latest collection ### Result - an empty page is displayed - the developer tools show the following output: ```json { "status" : 500, "error" : "Internal Server Error", "path" : "GET: /api/enrichment/2a27259b-ce07-44d2-aebd-80af9d4dc78e", "timestamp" : "2023-01-25T17:50:00.702409Z", "exception" : "org.hibernate.HibernateException", "message" : "Unable to access lob stream", "cause" : "org.postgresql.util.PSQLException: Large Objects may not be used in auto-commit mode." } ```
1.0
Error 500 when displaying enrichment task detail - ### Steps - I submit enrichment of publications from the collection of spiritualist texts (via the REST API) - in the list of enrichment requests (`/enrichment/`) I select the latest collection ### Result - an empty page is displayed - the developer tools show the following output: ```json { "status" : 500, "error" : "Internal Server Error", "path" : "GET: /api/enrichment/2a27259b-ce07-44d2-aebd-80af9d4dc78e", "timestamp" : "2023-01-25T17:50:00.702409Z", "exception" : "org.hibernate.HibernateException", "message" : "Unable to access lob stream", "cause" : "org.postgresql.util.PSQLException: Large Objects may not be used in auto-commit mode." } ```
test
error when displaying enrichment task detail steps i submit enrichment of publications from the collection of spiritualist texts via the rest api in the list of enrichment requests enrichment i select the latest collection result an empty page is displayed the developer tools show the following output json status error internal server error path get api enrichment aebd timestamp exception org hibernate hibernateexception message unable to access lob stream cause org postgresql util psqlexception large objects may not be used in auto commit mode
1
27,918
8,052,578,854
IssuesEvent
2018-08-01 19:47:28
oracle/opengrok
https://api.github.com/repos/oracle/opengrok
opened
consider shipping tools Python scripts as python package
build enhancement
As mentioned in #2245, the Python scripts > should be a python package and the distribution version of this package we should ship with each opengrok release.
1.0
consider shipping tools Python scripts as python package - As mentioned in #2245, the Python scripts > should be a python package and the distribution version of this package we should ship with each opengrok release.
non_test
consider shipping tools python scripts as python package as mentioned in the python scripts should be a python package and the distribution version of this package we should ship with each opengrok release
0
427,434
29,812,534,561
IssuesEvent
2023-06-16 16:10:04
FRONTENDSCHOOL5/final-10-Goodi
https://api.github.com/repos/FRONTENDSCHOOL5/final-10-Goodi
closed
💅 [Style]: PostProduct page style
documentation style
## Description PostProduct page style ## To-do - [x] Create PostProduct page route - [x] Create and style UploadImage component - [x] PostProduct page style ## ETC 20230616 - 202300613
1.0
💅 [Style]: PostProduct page style - ## Description PostProduct page style ## To-do - [x] Create PostProduct page route - [x] Create and style UploadImage component - [x] PostProduct page style ## ETC 20230616 - 202300613
non_test
💅 postproduct page style description postproduct page style to do create postproduct page route create and style uploadimage component postproduct page style etc
0
224,393
17,186,041,254
IssuesEvent
2021-07-16 02:11:17
atsign-foundation/at_tools
https://api.github.com/repos/atsign-foundation/at_tools
closed
at_ve_doctor README.md needs to explain what the tool does
1 SP PR15 PR16 documentation
@cconstab initial commit is a good starting point `Small demo/tool to check that your virtual environment is up and working working and that all the @Signs are ready and activated..` Where do I get the virtual environment (VE) from, and how is this going to help me with it? How do I use the tool? What is the purpose of a lib that calculates 42 and a test that looks for 42?
1.0
at_ve_doctor README.md needs to explain what the tool does - @cconstab initial commit is a good starting point `Small demo/tool to check that your virtual environment is up and working working and that all the @Signs are ready and activated..` Where do I get the virtual environment (VE) from, and how is this going to help me with it? How do I use the tool? What is the purpose of a lib that calculates 42 and a test that looks for 42?
non_test
at ve doctor readme md needs to explain what the tool does cconstab initial commit is a good starting point small demo tool to check that your virtual environment is up and working working and that all the signs are ready and activated where do i get the virtual environment ve from and how is this going to help me with it how do i use the tool what is the purpose of a lib that calculates and a test that looks for
0
225,217
17,799,174,566
IssuesEvent
2021-09-01 04:33:30
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: tpcc/multiregion/survive=region/chaos=true failed
C-test-failure O-robot O-roachtest branch-master
roachtest.tpcc/multiregion/survive=region/chaos=true [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3366874&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3366874&tab=artifacts#/tpcc/multiregion/survive=region/chaos=true) on master @ [8cae60f603ccc4d83137167b3b31cab09be9d41a](https://github.com/cockroachdb/cockroach/commits/8cae60f603ccc4d83137167b3b31cab09be9d41a): ``` | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected <=100 errors, found from 51649.000000, to 144458.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (399) secondary error attachment | error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2121"}: expected <=100 errors, found from 68062.000000, to 190722.000000 | (1) attached stack trace | -- stack trace: | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:207 | | [...repeated from below...] 
| Wraps: (2) error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2121"} | Wraps: (3) attached stack trace | -- stack trace: | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime.func2 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:78 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:206 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected <=100 errors, found from 68062.000000, to 190722.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (400) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:207 | [...repeated from below...] 
Wraps: (401) error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2111"} Wraps: (402) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:78 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:206 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1371 Wraps: (403) expected <=100 errors, found from 68204.000000, to 190717.000000 Error types: (1) *secondary.withSecondaryError (2) *secondary.withSecondaryError (3) *secondary.withSecondaryError (4) *secondary.withSecondaryError (5) *secondary.withSecondaryError (6) *secondary.withSecondaryError (7) *secondary.withSecondaryError (8) *secondary.withSecondaryError (9) *secondary.withSecondaryError (10) *secondary.withSecondaryError (11) *secondary.withSecondaryError (12) *secondary.withSecondaryError (13) *secondary.withSecondaryError (14) *secondary.withSecondaryError (15) *secondary.withSecondaryError (16) *secondary.withSecondaryError (17) *secondary.withSecondaryError (18) *secondary.withSecondaryError (19) *secondary.withSecondaryError (20) *secondary.withSecondaryError (21) *secondary.withSecondaryError (22) *secondary.withSecondaryError (23) *secondary.withSecondaryError (24) *secondary.withSecondaryError (25) *secondary.withSecondaryError 
(26) *secondary.withSecondaryError (27) *secondary.withSecondaryError (28) *secondary.withSecondaryError (29) *secondary.withSecondaryError (30) *secondary.withSecondaryError (31) *secondary.withSecondaryError (32) *secondary.withSecondaryError (33) *secondary.withSecondaryError (34) *secondary.withSecondaryError (35) *secondary.withSecondaryError (36) *secondary.withSecondaryError (37) *secondary.withSecondaryError (38) *secondary.withSecondaryError (39) *secondary.withSecondaryError (40) *secondary.withSecondaryError (41) *secondary.withSecondaryError (42) *secondary.withSecondaryError (43) *secondary.withSecondaryError (44) *secondary.withSecondaryError (45) *secondary.withSecondaryError (46) *secondary.withSecondaryError (47) *secondary.withSecondaryError (48) *secondary.withSecondaryError (49) *secondary.withSecondaryError (50) *secondary.withSecondaryError (51) *secondary.withSecondaryError (52) *secondary.withSecondaryError (53) *secondary.withSecondaryError (54) *secondary.withSecondaryError (55) *secondary.withSecondaryError (56) *secondary.withSecondaryError (57) *secondary.withSecondaryError (58) *secondary.withSecondaryError (59) *secondary.withSecondaryError (60) *secondary.withSecondaryError (61) *secondary.withSecondaryError (62) *secondary.withSecondaryError (63) *secondary.withSecondaryError (64) *secondary.withSecondaryError (65) *secondary.withSecondaryError (66) *secondary.withSecondaryError (67) *secondary.withSecondaryError (68) *secondary.withSecondaryError (69) *secondary.withSecondaryError (70) *secondary.withSecondaryError (71) *secondary.withSecondaryError (72) *secondary.withSecondaryError (73) *secondary.withSecondaryError (74) *secondary.withSecondaryError (75) *secondary.withSecondaryError (76) *secondary.withSecondaryError (77) *secondary.withSecondaryError (78) *secondary.withSecondaryError (79) *secondary.withSecondaryError (80) *secondary.withSecondaryError (81) *secondary.withSecondaryError (82) *secondary.withSecondaryError (83) 
*secondary.withSecondaryError (84) *secondary.withSecondaryError (85) *secondary.withSecondaryError (86) *secondary.withSecondaryError (87) *secondary.withSecondaryError (88) *secondary.withSecondaryError (89) *secondary.withSecondaryError (90) *secondary.withSecondaryError (91) *secondary.withSecondaryError (92) *secondary.withSecondaryError (93) *secondary.withSecondaryError (94) *secondary.withSecondaryError (95) *secondary.withSecondaryError (96) *secondary.withSecondaryError (97) *secondary.withSecondaryError (98) *secondary.withSecondaryError (99) *secondary.withSecondaryError (100) *secondary.withSecondaryError (101) *secondary.withSecondaryError (102) *secondary.withSecondaryError (103) *secondary.withSecondaryError (104) *secondary.withSecondaryError (105) *secondary.withSecondaryError (106) *secondary.withSecondaryError (107) *secondary.withSecondaryError (108) *secondary.withSecondaryError (109) *secondary.withSecondaryError (110) *secondary.withSecondaryError (111) *secondary.withSecondaryError (112) *secondary.withSecondaryError (113) *secondary.withSecondaryError (114) *secondary.withSecondaryError (115) *secondary.withSecondaryError (116) *secondary.withSecondaryError (117) *secondary.withSecondaryError (118) *secondary.withSecondaryError (119) *secondary.withSecondaryError (120) *secondary.withSecondaryError (121) *secondary.withSecondaryError (122) *secondary.withSecondaryError (123) *secondary.withSecondaryError (124) *secondary.withSecondaryError (125) *secondary.withSecondaryError (126) *secondary.withSecondaryError (127) *secondary.withSecondaryError (128) *secondary.withSecondaryError (129) *secondary.withSecondaryError (130) *secondary.withSecondaryError (131) *secondary.withSecondaryError (132) *secondary.withSecondaryError (133) *secondary.withSecondaryError (134) *secondary.withSecondaryError (135) *secondary.withSecondaryError (136) *secondary.withSecondaryError (137) *secondary.withSecondaryError (138) *secondary.withSecondaryError (139) 
*secondary.withSecondaryError (140) *secondary.withSecondaryError (141) *secondary.withSecondaryError (142) *secondary.withSecondaryError (143) *secondary.withSecondaryError (144) *secondary.withSecondaryError (145) *secondary.withSecondaryError (146) *secondary.withSecondaryError (147) *secondary.withSecondaryError (148) *secondary.withSecondaryError (149) *secondary.withSecondaryError (150) *secondary.withSecondaryError (151) *secondary.withSecondaryError (152) *secondary.withSecondaryError (153) *secondary.withSecondaryError (154) *secondary.withSecondaryError (155) *secondary.withSecondaryError (156) *secondary.withSecondaryError (157) *secondary.withSecondaryError (158) *secondary.withSecondaryError (159) *secondary.withSecondaryError (160) *secondary.withSecondaryError (161) *secondary.withSecondaryError (162) *secondary.withSecondaryError (163) *secondary.withSecondaryError (164) *secondary.withSecondaryError (165) *secondary.withSecondaryError (166) *secondary.withSecondaryError (167) *secondary.withSecondaryError (168) *secondary.withSecondaryError (169) *secondary.withSecondaryError (170) *secondary.withSecondaryError (171) *secondary.withSecondaryError (172) *secondary.withSecondaryError (173) *secondary.withSecondaryError (174) *secondary.withSecondaryError (175) *secondary.withSecondaryError (176) *secondary.withSecondaryError (177) *secondary.withSecondaryError (178) *secondary.withSecondaryError (179) *secondary.withSecondaryError (180) *secondary.withSecondaryError (181) *secondary.withSecondaryError (182) *secondary.withSecondaryError (183) *secondary.withSecondaryError (184) *secondary.withSecondaryError (185) *secondary.withSecondaryError (186) *secondary.withSecondaryError (187) *secondary.withSecondaryError (188) *secondary.withSecondaryError (189) *secondary.withSecondaryError (190) *secondary.withSecondaryError (191) *secondary.withSecondaryError (192) *secondary.withSecondaryError (193) *secondary.withSecondaryError (194) 
*secondary.withSecondaryError (195) *secondary.withSecondaryError (196) *secondary.withSecondaryError (197) *secondary.withSecondaryError (198) *secondary.withSecondaryError (199) *secondary.withSecondaryError (200) *secondary.withSecondaryError (201) *secondary.withSecondaryError (202) *secondary.withSecondaryError (203) *secondary.withSecondaryError (204) *secondary.withSecondaryError (205) *secondary.withSecondaryError (206) *secondary.withSecondaryError (207) *secondary.withSecondaryError (208) *secondary.withSecondaryError (209) *secondary.withSecondaryError (210) *secondary.withSecondaryError (211) *secondary.withSecondaryError (212) *secondary.withSecondaryError (213) *secondary.withSecondaryError (214) *secondary.withSecondaryError (215) *secondary.withSecondaryError (216) *secondary.withSecondaryError (217) *secondary.withSecondaryError (218) *secondary.withSecondaryError (219) *secondary.withSecondaryError (220) *secondary.withSecondaryError (221) *secondary.withSecondaryError (222) *secondary.withSecondaryError (223) *secondary.withSecondaryError (224) *secondary.withSecondaryError (225) *secondary.withSecondaryError (226) *secondary.withSecondaryError (227) *secondary.withSecondaryError (228) *secondary.withSecondaryError (229) *secondary.withSecondaryError (230) *secondary.withSecondaryError (231) *secondary.withSecondaryError (232) *secondary.withSecondaryError (233) *secondary.withSecondaryError (234) *secondary.withSecondaryError (235) *secondary.withSecondaryError (236) *secondary.withSecondaryError (237) *secondary.withSecondaryError (238) *secondary.withSecondaryError (239) *secondary.withSecondaryError (240) *secondary.withSecondaryError (241) *secondary.withSecondaryError (242) *secondary.withSecondaryError (243) *secondary.withSecondaryError (244) *secondary.withSecondaryError (245) *secondary.withSecondaryError (246) *secondary.withSecondaryError (247) *secondary.withSecondaryError (248) *secondary.withSecondaryError (249) 
*secondary.withSecondaryError (250) *secondary.withSecondaryError (251) *secondary.withSecondaryError (252) *secondary.withSecondaryError (253) *secondary.withSecondaryError (254) *secondary.withSecondaryError (255) *secondary.withSecondaryError (256) *secondary.withSecondaryError (257) *secondary.withSecondaryError (258) *secondary.withSecondaryError (259) *secondary.withSecondaryError (260) *secondary.withSecondaryError (261) *secondary.withSecondaryError (262) *secondary.withSecondaryError (263) *secondary.withSecondaryError (264) *secondary.withSecondaryError (265) *secondary.withSecondaryError (266) *secondary.withSecondaryError (267) *secondary.withSecondaryError (268) *secondary.withSecondaryError (269) *secondary.withSecondaryError (270) *secondary.withSecondaryError (271) *secondary.withSecondaryError (272) *secondary.withSecondaryError (273) *secondary.withSecondaryError (274) *secondary.withSecondaryError (275) *secondary.withSecondaryError (276) *secondary.withSecondaryError (277) *secondary.withSecondaryError (278) *secondary.withSecondaryError (279) *secondary.withSecondaryError (280) *secondary.withSecondaryError (281) *secondary.withSecondaryError (282) *secondary.withSecondaryError (283) *secondary.withSecondaryError (284) *secondary.withSecondaryError (285) *secondary.withSecondaryError (286) *secondary.withSecondaryError (287) *secondary.withSecondaryError (288) *secondary.withSecondaryError (289) *secondary.withSecondaryError (290) *secondary.withSecondaryError (291) *secondary.withSecondaryError (292) *secondary.withSecondaryError (293) *secondary.withSecondaryError (294) *secondary.withSecondaryError (295) *secondary.withSecondaryError (296) *secondary.withSecondaryError (297) *secondary.withSecondaryError (298) *secondary.withSecondaryError (299) *secondary.withSecondaryError (300) *secondary.withSecondaryError (301) *secondary.withSecondaryError (302) *secondary.withSecondaryError (303) *secondary.withSecondaryError (304) 
*secondary.withSecondaryError (305) *secondary.withSecondaryError (306) *secondary.withSecondaryError (307) *secondary.withSecondaryError (308) *secondary.withSecondaryError (309) *secondary.withSecondaryError (310) *secondary.withSecondaryError (311) *secondary.withSecondaryError (312) *secondary.withSecondaryError (313) *secondary.withSecondaryError (314) *secondary.withSecondaryError (315) *secondary.withSecondaryError (316) *secondary.withSecondaryError (317) *secondary.withSecondaryError (318) *secondary.withSecondaryError (319) *secondary.withSecondaryError (320) *secondary.withSecondaryError (321) *secondary.withSecondaryError (322) *secondary.withSecondaryError (323) *secondary.withSecondaryError (324) *secondary.withSecondaryError (325) *secondary.withSecondaryError (326) *secondary.withSecondaryError (327) *secondary.withSecondaryError (328) *secondary.withSecondaryError (329) *secondary.withSecondaryError (330) *secondary.withSecondaryError (331) *secondary.withSecondaryError (332) *secondary.withSecondaryError (333) *secondary.withSecondaryError (334) *secondary.withSecondaryError (335) *secondary.withSecondaryError (336) *secondary.withSecondaryError (337) *secondary.withSecondaryError (338) *secondary.withSecondaryError (339) *secondary.withSecondaryError (340) *secondary.withSecondaryError (341) *secondary.withSecondaryError (342) *secondary.withSecondaryError (343) *secondary.withSecondaryError (344) *secondary.withSecondaryError (345) *secondary.withSecondaryError (346) *secondary.withSecondaryError (347) *secondary.withSecondaryError (348) *secondary.withSecondaryError (349) *secondary.withSecondaryError (350) *secondary.withSecondaryError (351) *secondary.withSecondaryError (352) *secondary.withSecondaryError (353) *secondary.withSecondaryError (354) *secondary.withSecondaryError (355) *secondary.withSecondaryError (356) *secondary.withSecondaryError (357) *secondary.withSecondaryError (358) *secondary.withSecondaryError (359) 
*secondary.withSecondaryError (360) *secondary.withSecondaryError (361) *secondary.withSecondaryError (362) *secondary.withSecondaryError (363) *secondary.withSecondaryError (364) *secondary.withSecondaryError (365) *secondary.withSecondaryError (366) *secondary.withSecondaryError (367) *secondary.withSecondaryError (368) *secondary.withSecondaryError (369) *secondary.withSecondaryError (370) *secondary.withSecondaryError (371) *secondary.withSecondaryError (372) *secondary.withSecondaryError (373) *secondary.withSecondaryError (374) *secondary.withSecondaryError (375) *secondary.withSecondaryError (376) *secondary.withSecondaryError (377) *secondary.withSecondaryError (378) *secondary.withSecondaryError (379) *secondary.withSecondaryError (380) *secondary.withSecondaryError (381) *secondary.withSecondaryError (382) *secondary.withSecondaryError (383) *secondary.withSecondaryError (384) *secondary.withSecondaryError (385) *secondary.withSecondaryError (386) *secondary.withSecondaryError (387) *secondary.withSecondaryError (388) *secondary.withSecondaryError (389) *secondary.withSecondaryError (390) *secondary.withSecondaryError (391) *secondary.withSecondaryError (392) *secondary.withSecondaryError (393) *secondary.withSecondaryError (394) *secondary.withSecondaryError (395) *secondary.withSecondaryError (396) *secondary.withSecondaryError (397) *secondary.withSecondaryError (398) *secondary.withSecondaryError (399) *secondary.withSecondaryError (400) *withstack.withStack (401) *errutil.withPrefix (402) *withstack.withStack (403) *errutil.leafError ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> /cc @cockroachdb/multiregion <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpcc/multiregion/survive=region/chaos=true.*&sort=title+created&display=lastcommented+project) | [Improve this 
report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: tpcc/multiregion/survive=region/chaos=true failed - roachtest.tpcc/multiregion/survive=region/chaos=true [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3366874&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3366874&tab=artifacts#/tpcc/multiregion/survive=region/chaos=true) on master @ [8cae60f603ccc4d83137167b3b31cab09be9d41a](https://github.com/cockroachdb/cockroach/commits/8cae60f603ccc4d83137167b3b31cab09be9d41a): ``` | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected <=100 errors, found from 51649.000000, to 144458.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (399) secondary error attachment | error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2121"}: expected <=100 errors, found from 68062.000000, to 190722.000000 | (1) attached stack trace | -- stack trace: | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:207 | | [...repeated from below...] 
| Wraps: (2) error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2121"} | Wraps: (3) attached stack trace | -- stack trace: | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime.func2 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:78 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:206 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | | runtime.goexit | | /usr/local/go/src/runtime/asm_amd64.s:1371 | Wraps: (4) expected <=100 errors, found from 68062.000000, to 190722.000000 | Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.leafError Wraps: (400) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:207 | [...repeated from below...] 
Wraps: (401) error at from 2021-08-27T12:42:27Z, to 2021-08-27T12:47:07Z on metric workload_tpcc_newOrder_error_total{instance="34.139.254.172:2111"} Wraps: (402) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:78 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkMetrics | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:206 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).checkUptime | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:56 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccChaosEventProcessor).listen.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/drt.go:282 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1371 Wraps: (403) expected <=100 errors, found from 68204.000000, to 190717.000000 Error types: (1) *secondary.withSecondaryError (2) *secondary.withSecondaryError (3) *secondary.withSecondaryError (4) *secondary.withSecondaryError (5) *secondary.withSecondaryError (6) *secondary.withSecondaryError (7) *secondary.withSecondaryError (8) *secondary.withSecondaryError (9) *secondary.withSecondaryError (10) *secondary.withSecondaryError (11) *secondary.withSecondaryError (12) *secondary.withSecondaryError (13) *secondary.withSecondaryError (14) *secondary.withSecondaryError (15) *secondary.withSecondaryError (16) *secondary.withSecondaryError (17) *secondary.withSecondaryError (18) *secondary.withSecondaryError (19) *secondary.withSecondaryError (20) *secondary.withSecondaryError (21) *secondary.withSecondaryError (22) *secondary.withSecondaryError (23) *secondary.withSecondaryError (24) *secondary.withSecondaryError (25) *secondary.withSecondaryError 
(26) *secondary.withSecondaryError (27) *secondary.withSecondaryError (28) *secondary.withSecondaryError (29) *secondary.withSecondaryError (30) *secondary.withSecondaryError (31) *secondary.withSecondaryError (32) *secondary.withSecondaryError (33) *secondary.withSecondaryError (34) *secondary.withSecondaryError (35) *secondary.withSecondaryError (36) *secondary.withSecondaryError (37) *secondary.withSecondaryError (38) *secondary.withSecondaryError (39) *secondary.withSecondaryError (40) *secondary.withSecondaryError (41) *secondary.withSecondaryError (42) *secondary.withSecondaryError (43) *secondary.withSecondaryError (44) *secondary.withSecondaryError (45) *secondary.withSecondaryError (46) *secondary.withSecondaryError (47) *secondary.withSecondaryError (48) *secondary.withSecondaryError (49) *secondary.withSecondaryError (50) *secondary.withSecondaryError (51) *secondary.withSecondaryError (52) *secondary.withSecondaryError (53) *secondary.withSecondaryError (54) *secondary.withSecondaryError (55) *secondary.withSecondaryError (56) *secondary.withSecondaryError (57) *secondary.withSecondaryError (58) *secondary.withSecondaryError (59) *secondary.withSecondaryError (60) *secondary.withSecondaryError (61) *secondary.withSecondaryError (62) *secondary.withSecondaryError (63) *secondary.withSecondaryError (64) *secondary.withSecondaryError (65) *secondary.withSecondaryError (66) *secondary.withSecondaryError (67) *secondary.withSecondaryError (68) *secondary.withSecondaryError (69) *secondary.withSecondaryError (70) *secondary.withSecondaryError (71) *secondary.withSecondaryError (72) *secondary.withSecondaryError (73) *secondary.withSecondaryError (74) *secondary.withSecondaryError (75) *secondary.withSecondaryError (76) *secondary.withSecondaryError (77) *secondary.withSecondaryError (78) *secondary.withSecondaryError (79) *secondary.withSecondaryError (80) *secondary.withSecondaryError (81) *secondary.withSecondaryError (82) *secondary.withSecondaryError (83) 
*secondary.withSecondaryError (84) *secondary.withSecondaryError (85) *secondary.withSecondaryError (86) *secondary.withSecondaryError (87) *secondary.withSecondaryError (88) *secondary.withSecondaryError (89) *secondary.withSecondaryError (90) *secondary.withSecondaryError (91) *secondary.withSecondaryError (92) *secondary.withSecondaryError (93) *secondary.withSecondaryError (94) *secondary.withSecondaryError (95) *secondary.withSecondaryError (96) *secondary.withSecondaryError (97) *secondary.withSecondaryError (98) *secondary.withSecondaryError (99) *secondary.withSecondaryError (100) *secondary.withSecondaryError (101) *secondary.withSecondaryError (102) *secondary.withSecondaryError (103) *secondary.withSecondaryError (104) *secondary.withSecondaryError (105) *secondary.withSecondaryError (106) *secondary.withSecondaryError (107) *secondary.withSecondaryError (108) *secondary.withSecondaryError (109) *secondary.withSecondaryError (110) *secondary.withSecondaryError (111) *secondary.withSecondaryError (112) *secondary.withSecondaryError (113) *secondary.withSecondaryError (114) *secondary.withSecondaryError (115) *secondary.withSecondaryError (116) *secondary.withSecondaryError (117) *secondary.withSecondaryError (118) *secondary.withSecondaryError (119) *secondary.withSecondaryError (120) *secondary.withSecondaryError (121) *secondary.withSecondaryError (122) *secondary.withSecondaryError (123) *secondary.withSecondaryError (124) *secondary.withSecondaryError (125) *secondary.withSecondaryError (126) *secondary.withSecondaryError (127) *secondary.withSecondaryError (128) *secondary.withSecondaryError (129) *secondary.withSecondaryError (130) *secondary.withSecondaryError (131) *secondary.withSecondaryError (132) *secondary.withSecondaryError (133) *secondary.withSecondaryError (134) *secondary.withSecondaryError (135) *secondary.withSecondaryError (136) *secondary.withSecondaryError (137) *secondary.withSecondaryError (138) *secondary.withSecondaryError (139) 
*secondary.withSecondaryError (140) *secondary.withSecondaryError (141) *secondary.withSecondaryError (142) *secondary.withSecondaryError (143) *secondary.withSecondaryError (144) *secondary.withSecondaryError (145) *secondary.withSecondaryError (146) *secondary.withSecondaryError (147) *secondary.withSecondaryError (148) *secondary.withSecondaryError (149) *secondary.withSecondaryError (150) *secondary.withSecondaryError (151) *secondary.withSecondaryError (152) *secondary.withSecondaryError (153) *secondary.withSecondaryError (154) *secondary.withSecondaryError (155) *secondary.withSecondaryError (156) *secondary.withSecondaryError (157) *secondary.withSecondaryError (158) *secondary.withSecondaryError (159) *secondary.withSecondaryError (160) *secondary.withSecondaryError (161) *secondary.withSecondaryError (162) *secondary.withSecondaryError (163) *secondary.withSecondaryError (164) *secondary.withSecondaryError (165) *secondary.withSecondaryError (166) *secondary.withSecondaryError (167) *secondary.withSecondaryError (168) *secondary.withSecondaryError (169) *secondary.withSecondaryError (170) *secondary.withSecondaryError (171) *secondary.withSecondaryError (172) *secondary.withSecondaryError (173) *secondary.withSecondaryError (174) *secondary.withSecondaryError (175) *secondary.withSecondaryError (176) *secondary.withSecondaryError (177) *secondary.withSecondaryError (178) *secondary.withSecondaryError (179) *secondary.withSecondaryError (180) *secondary.withSecondaryError (181) *secondary.withSecondaryError (182) *secondary.withSecondaryError (183) *secondary.withSecondaryError (184) *secondary.withSecondaryError (185) *secondary.withSecondaryError (186) *secondary.withSecondaryError (187) *secondary.withSecondaryError (188) *secondary.withSecondaryError (189) *secondary.withSecondaryError (190) *secondary.withSecondaryError (191) *secondary.withSecondaryError (192) *secondary.withSecondaryError (193) *secondary.withSecondaryError (194) 
*secondary.withSecondaryError (195) *secondary.withSecondaryError (196) *secondary.withSecondaryError (197) *secondary.withSecondaryError (198) *secondary.withSecondaryError (199) *secondary.withSecondaryError (200) *secondary.withSecondaryError (201) *secondary.withSecondaryError (202) *secondary.withSecondaryError (203) *secondary.withSecondaryError (204) *secondary.withSecondaryError (205) *secondary.withSecondaryError (206) *secondary.withSecondaryError (207) *secondary.withSecondaryError (208) *secondary.withSecondaryError (209) *secondary.withSecondaryError (210) *secondary.withSecondaryError (211) *secondary.withSecondaryError (212) *secondary.withSecondaryError (213) *secondary.withSecondaryError (214) *secondary.withSecondaryError (215) *secondary.withSecondaryError (216) *secondary.withSecondaryError (217) *secondary.withSecondaryError (218) *secondary.withSecondaryError (219) *secondary.withSecondaryError (220) *secondary.withSecondaryError (221) *secondary.withSecondaryError (222) *secondary.withSecondaryError (223) *secondary.withSecondaryError (224) *secondary.withSecondaryError (225) *secondary.withSecondaryError (226) *secondary.withSecondaryError (227) *secondary.withSecondaryError (228) *secondary.withSecondaryError (229) *secondary.withSecondaryError (230) *secondary.withSecondaryError (231) *secondary.withSecondaryError (232) *secondary.withSecondaryError (233) *secondary.withSecondaryError (234) *secondary.withSecondaryError (235) *secondary.withSecondaryError (236) *secondary.withSecondaryError (237) *secondary.withSecondaryError (238) *secondary.withSecondaryError (239) *secondary.withSecondaryError (240) *secondary.withSecondaryError (241) *secondary.withSecondaryError (242) *secondary.withSecondaryError (243) *secondary.withSecondaryError (244) *secondary.withSecondaryError (245) *secondary.withSecondaryError (246) *secondary.withSecondaryError (247) *secondary.withSecondaryError (248) *secondary.withSecondaryError (249) 
*secondary.withSecondaryError (250) *secondary.withSecondaryError (251) *secondary.withSecondaryError (252) *secondary.withSecondaryError (253) *secondary.withSecondaryError (254) *secondary.withSecondaryError (255) *secondary.withSecondaryError (256) *secondary.withSecondaryError (257) *secondary.withSecondaryError (258) *secondary.withSecondaryError (259) *secondary.withSecondaryError (260) *secondary.withSecondaryError (261) *secondary.withSecondaryError (262) *secondary.withSecondaryError (263) *secondary.withSecondaryError (264) *secondary.withSecondaryError (265) *secondary.withSecondaryError (266) *secondary.withSecondaryError (267) *secondary.withSecondaryError (268) *secondary.withSecondaryError (269) *secondary.withSecondaryError (270) *secondary.withSecondaryError (271) *secondary.withSecondaryError (272) *secondary.withSecondaryError (273) *secondary.withSecondaryError (274) *secondary.withSecondaryError (275) *secondary.withSecondaryError (276) *secondary.withSecondaryError (277) *secondary.withSecondaryError (278) *secondary.withSecondaryError (279) *secondary.withSecondaryError (280) *secondary.withSecondaryError (281) *secondary.withSecondaryError (282) *secondary.withSecondaryError (283) *secondary.withSecondaryError (284) *secondary.withSecondaryError (285) *secondary.withSecondaryError (286) *secondary.withSecondaryError (287) *secondary.withSecondaryError (288) *secondary.withSecondaryError (289) *secondary.withSecondaryError (290) *secondary.withSecondaryError (291) *secondary.withSecondaryError (292) *secondary.withSecondaryError (293) *secondary.withSecondaryError (294) *secondary.withSecondaryError (295) *secondary.withSecondaryError (296) *secondary.withSecondaryError (297) *secondary.withSecondaryError (298) *secondary.withSecondaryError (299) *secondary.withSecondaryError (300) *secondary.withSecondaryError (301) *secondary.withSecondaryError (302) *secondary.withSecondaryError (303) *secondary.withSecondaryError (304) 
*secondary.withSecondaryError (305) *secondary.withSecondaryError (306) *secondary.withSecondaryError (307) *secondary.withSecondaryError (308) *secondary.withSecondaryError (309) *secondary.withSecondaryError (310) *secondary.withSecondaryError (311) *secondary.withSecondaryError (312) *secondary.withSecondaryError (313) *secondary.withSecondaryError (314) *secondary.withSecondaryError (315) *secondary.withSecondaryError (316) *secondary.withSecondaryError (317) *secondary.withSecondaryError (318) *secondary.withSecondaryError (319) *secondary.withSecondaryError (320) *secondary.withSecondaryError (321) *secondary.withSecondaryError (322) *secondary.withSecondaryError (323) *secondary.withSecondaryError (324) *secondary.withSecondaryError (325) *secondary.withSecondaryError (326) *secondary.withSecondaryError (327) *secondary.withSecondaryError (328) *secondary.withSecondaryError (329) *secondary.withSecondaryError (330) *secondary.withSecondaryError (331) *secondary.withSecondaryError (332) *secondary.withSecondaryError (333) *secondary.withSecondaryError (334) *secondary.withSecondaryError (335) *secondary.withSecondaryError (336) *secondary.withSecondaryError (337) *secondary.withSecondaryError (338) *secondary.withSecondaryError (339) *secondary.withSecondaryError (340) *secondary.withSecondaryError (341) *secondary.withSecondaryError (342) *secondary.withSecondaryError (343) *secondary.withSecondaryError (344) *secondary.withSecondaryError (345) *secondary.withSecondaryError (346) *secondary.withSecondaryError (347) *secondary.withSecondaryError (348) *secondary.withSecondaryError (349) *secondary.withSecondaryError (350) *secondary.withSecondaryError (351) *secondary.withSecondaryError (352) *secondary.withSecondaryError (353) *secondary.withSecondaryError (354) *secondary.withSecondaryError (355) *secondary.withSecondaryError (356) *secondary.withSecondaryError (357) *secondary.withSecondaryError (358) *secondary.withSecondaryError (359) 
*secondary.withSecondaryError (360) *secondary.withSecondaryError (361) *secondary.withSecondaryError (362) *secondary.withSecondaryError (363) *secondary.withSecondaryError (364) *secondary.withSecondaryError (365) *secondary.withSecondaryError (366) *secondary.withSecondaryError (367) *secondary.withSecondaryError (368) *secondary.withSecondaryError (369) *secondary.withSecondaryError (370) *secondary.withSecondaryError (371) *secondary.withSecondaryError (372) *secondary.withSecondaryError (373) *secondary.withSecondaryError (374) *secondary.withSecondaryError (375) *secondary.withSecondaryError (376) *secondary.withSecondaryError (377) *secondary.withSecondaryError (378) *secondary.withSecondaryError (379) *secondary.withSecondaryError (380) *secondary.withSecondaryError (381) *secondary.withSecondaryError (382) *secondary.withSecondaryError (383) *secondary.withSecondaryError (384) *secondary.withSecondaryError (385) *secondary.withSecondaryError (386) *secondary.withSecondaryError (387) *secondary.withSecondaryError (388) *secondary.withSecondaryError (389) *secondary.withSecondaryError (390) *secondary.withSecondaryError (391) *secondary.withSecondaryError (392) *secondary.withSecondaryError (393) *secondary.withSecondaryError (394) *secondary.withSecondaryError (395) *secondary.withSecondaryError (396) *secondary.withSecondaryError (397) *secondary.withSecondaryError (398) *secondary.withSecondaryError (399) *secondary.withSecondaryError (400) *withstack.withStack (401) *errutil.withPrefix (402) *withstack.withStack (403) *errutil.leafError ``` <details><summary>Reproduce</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) </p> </details> /cc @cockroachdb/multiregion <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpcc/multiregion/survive=region/chaos=true.*&sort=title+created&display=lastcommented+project) | [Improve this 
report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
1
72,537
8,750,704,058
IssuesEvent
2018-12-13 20:00:11
oasis-tcs/sarif-spec
https://api.github.com/repos/oasis-tcs/sarif-spec
closed
Version control details not strongly associated with results
CSD.2 design-approved enhancement impact-non-breaking-change resolved-fixed triage-approved
The version control details object tells us about the VS server, revision, etc., but doesn't contain an enlistment root location. As a result, we can't use this information to do things like rewrite URLs to point to hosted content.
1.0
Version control details not strongly associated with results - The version control details object tells us about the VS server, revision, etc., but doesn't contain an enlistment root location. As a result, we can't use this information to do things like rewrite URLs to point to hosted content.
non_test
0
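The SARIF record above asks for an enlistment root alongside the repository URI and revision. As a sketch only (the property names assume the `mappedTo` artifact location that later drafts of the SARIF 2.1.0 spec define for `versionControlDetails`; the URIs are illustrative, not from the issue), the shape the fix could take is:

```json
{
  "versionControlProvenance": [
    {
      "repositoryUri": "https://example.com/org/project",
      "revisionId": "4b1f2c9",
      "branch": "main",
      "mappedTo": {
        "uri": "file:///C:/src/project/"
      }
    }
  ]
}
```

With a root location like `mappedTo` present, a viewer can rewrite result URIs relative to that root into links against hosted content, which is exactly the scenario the issue says the original object could not support.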
216,153
24,243,020,003
IssuesEvent
2022-09-27 08:21:58
BranislavBeno/Sprint-Statistics-Viewer
https://api.github.com/repos/BranislavBeno/Sprint-Statistics-Viewer
closed
spring-boot-starter-actuator-2.7.4.jar: 5 vulnerabilities (highest severity is: 7.5) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-actuator-2.7.4.jar</b></p></summary> <p></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-25857](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38749](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38752](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38751](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | 
[CVE-2022-38750](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-25857</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due missing to nested depth limitation for collections. 
<p>Publish Date: 2022-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38749</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> 
<p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749>CVE-2022-38749</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/526/stackoverflow-oss-fuzz-47027">https://bitbucket.org/snakeyaml/snakeyaml/issues/526/stackoverflow-oss-fuzz-47027</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38752</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - 
spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-9w3m-gqgf-c4p9">https://github.com/advisories/GHSA-9w3m-gqgf-c4p9</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.32 </p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38751</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. 
<p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38750</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> 
### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750>CVE-2022-38750</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
spring-boot-starter-actuator-2.7.4.jar: 5 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-actuator-2.7.4.jar</b></p></summary> <p></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-25857](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38749](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38752](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | | [CVE-2022-38751](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | snakeyaml-1.30.jar | Transitive | 
N/A | &#10060; | | [CVE-2022-38750](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | snakeyaml-1.30.jar | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-25857</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due to missing nested depth limitation for collections. 
<p>Publish Date: 2022-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38749</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> 
<p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749>CVE-2022-38749</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/526/stackoverflow-oss-fuzz-47027">https://bitbucket.org/snakeyaml/snakeyaml/issues/526/stackoverflow-oss-fuzz-47027</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38752</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - 
spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-9w3m-gqgf-c4p9">https://github.com/advisories/GHSA-9w3m-gqgf-c4p9</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.32 </p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38751</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. 
<p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47039</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-38750</summary> ### Vulnerable Library - <b>snakeyaml-1.30.jar</b></p> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-actuator-2.7.4.jar (Root Library) - spring-boot-starter-2.7.4.jar - :x: **snakeyaml-1.30.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BranislavBeno/Sprint-Statistics-Viewer/commit/0b94b8b8927a1ac8359fec5e6a5a959d31186d90">0b94b8b8927a1ac8359fec5e6a5a959d31186d90</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> 
### Vulnerability Details <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750>CVE-2022-38750</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027</a></p> <p>Release Date: 2022-09-05</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_test
spring boot starter actuator jar vulnerabilities highest severity is autoclosed vulnerable library spring boot starter actuator jar path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high snakeyaml jar transitive n a medium snakeyaml jar transitive n a medium snakeyaml jar transitive n a medium snakeyaml jar transitive n a medium snakeyaml jar transitive n a details cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator jar root library spring boot starter jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details the package org yaml snakeyaml from and before are vulnerable to denial of service dos due to missing nested depth limitation for collections publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator jar root library spring boot starter jar x snakeyaml jar vulnerable library found in head 
commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator jar root library spring boot starter jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to 
dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator jar root library spring boot starter jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator jar root library spring boot starter jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more 
information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
0
286,053
24,715,952,102
IssuesEvent
2022-10-20 06:57:16
HyphaApp/hypha
https://api.github.com/repos/HyphaApp/hypha
closed
[Epic] How the PAF progress looks in Hypha activity feed view
Status: Tested - approved for live ✅ Type: Feature Theme: Project Page Epic Adopter:OTF Winter Sprint :skier:
## Additional context This is a sub-issue of #2911 ### Description This issue is to show how the PAF progress looks in the activity feed. - [x] PM submits completed PAF for approval from CA - [x] CA requests more information - [x] CA submits PAF to CEO, Finance, Compliance for approval - [x] CEO, Finance, Compliance request change - [x] CEO, Finance, Compliance for approval - [ ] Contract approved/signed/mutually executed -- both CEO and contractor signed contract (I think we need to show this, but not sure right now. Unsure if we need to display contract has been signed. **We need to find out if contract needs to be displayed**). ## Priority - High (keeping you from completing day-to-day tasks) ## Ideal deadline October 2022
1.0
[Epic] How the PAF progress looks in Hypha activity feed view - ## Additional context This is a sub-issue of #2911 ### Description This issue is to show how the PAF progress looks in the activity feed. - [x] PM submits completed PAF for approval from CA - [x] CA requests more information - [x] CA submits PAF to CEO, Finance, Compliance for approval - [x] CEO, Finance, Compliance request change - [x] CEO, Finance, Compliance for approval - [ ] Contract approved/signed/mutually executed -- both CEO and contractor signed contract (I think we need to show this, but not sure right now. Unsure if we need to display contract has been signed. **We need to find out if contract needs to be displayed**). ## Priority - High (keeping you from completing day-to-day tasks) ## Ideal deadline October 2022
test
how the paf progress looks in hypha activity feed view additional context this is a sub issue of description this issue is to show how the paf progress looks in the activity feed pm submits completed paf for approval from ca ca requests more information ca submits paf to ceo finance compliance for approval ceo finance compliance request change ceo finance compliance for approval contract approved signed mutually executed both ceo and contractor signed contract i think we need to show this but not sure right now unsure if we need to display contract has been signed we need to find out if contract needs to be displayed priority high keeping you from completing day to day tasks ideal deadline october
1
273,431
23,751,778,691
IssuesEvent
2022-08-31 21:24:43
pulumi/pulumi
https://api.github.com/repos/pulumi/pulumi
closed
Possible Python SDK regression in 3.38.0: module 'pulumi.runtime.settings' has no attribute '_set_test_mode_enabled'
kind/bug area/testing language/python awaiting-feedback
### What happened? After upgrade to 3.38.0, actions like `pulumi up` and `pulumi preview` are failing with the following stacktrace: ``` $ pulumi up Previewing update ([...]): Type Name Plan Info pulumi:pulumi:Stack [...] 1 error Diagnostics: pulumi:pulumi:Stack ([...]): error: Program failed with an unhandled exception: Traceback (most recent call last): File "/usr/local/bin/pulumi-language-python-exec", line 111, in <module> loop.run_until_complete(coro) File "/usr/local/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete return future.result() File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 126, in run_in_stack await run_pulumi_func(lambda: Stack(func)) File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 49, in run_pulumi_func func() File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 126, in <lambda> await run_pulumi_func(lambda: Stack(func)) File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 149, in __init__ func() File "/usr/local/bin/pulumi-language-python-exec", line 110, in <lambda> coro = pulumi.runtime.run_in_stack(lambda: runpy.run_path(args.PROGRAM, run_name='__main__')) File "/usr/local/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/usr/local/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "__main__.py", line 14, in <module> pulumi.runtime.settings._set_test_mode_enabled(True) AttributeError: module 'pulumi.runtime.settings' has no attribute '_set_test_mode_enabled' ``` Rolling back just the Python SDK to 3.37.2 (`pip install pulumi==3.37.2`) even with CLI still at 3.38.0 seems to resolve the issue (previews and updates are successful). ### Steps to reproduce 1. Have a python 3.10.6 environment 2. 
Have latest pulumi client and python SDK (3.38.0) installed 3. Run `pulumi preview` or `pulumi up` on a stack ### Expected Behavior Successful execution of the requested action. ### Actual Behavior Failed execution of the requested action with stacktrace above. ### Output of `pulumi about` ``` # pulumi about CLI Version 3.38.0 Go Version go1.19 Go Compiler gc Plugins NAME VERSION aws 5.13.0 aws-native 0.22.0 command 0.4.1 kubernetes 3.21.0 postgresql 3.6.0 python unknown random 4.8.0 Host OS debian Version 11.4 Arch x86_64 This project is written in python: executable='/usr/local/bin/python3' version='3.10.6' Backend Name [...] URL [...] User [...] Organizations Dependencies: NAME VERSION backoff 2.1.2 black 22.6.0 boto3 1.24.22 coloredlogs 15.0.1 debugpy 1.6.0 Jinja2 3.1.2 kubernetes 23.6.0 mergedeep 1.3.4 packaging 21.3.0 pip 22.2.1 pre-commit 2.20.0 pulumi-aws 5.13.0 pulumi-aws-native 0.22.0 pulumi-command 0.4.1 pulumi-kubernetes 3.21.0 pulumi-postgresql 3.6.0 pulumi-random 4.8.0 pyhumps 3.7.2 wheel 0.37.1 ``` ### Additional context We do not make use of the associated "test mode" or environment variable `PULUMI_TEST_MODE`, which are the only references I could find to the error itself. ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
1.0
Possible Python SDK regression in 3.38.0: module 'pulumi.runtime.settings' has no attribute '_set_test_mode_enabled' - ### What happened? After upgrade to 3.38.0, actions like `pulumi up` and `pulumi preview` are failing with the following stacktrace: ``` $ pulumi up Previewing update ([...]): Type Name Plan Info pulumi:pulumi:Stack [...] 1 error Diagnostics: pulumi:pulumi:Stack ([...]): error: Program failed with an unhandled exception: Traceback (most recent call last): File "/usr/local/bin/pulumi-language-python-exec", line 111, in <module> loop.run_until_complete(coro) File "/usr/local/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete return future.result() File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 126, in run_in_stack await run_pulumi_func(lambda: Stack(func)) File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 49, in run_pulumi_func func() File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 126, in <lambda> await run_pulumi_func(lambda: Stack(func)) File "/usr/local/lib/python3.10/site-packages/pulumi/runtime/stack.py", line 149, in __init__ func() File "/usr/local/bin/pulumi-language-python-exec", line 110, in <lambda> coro = pulumi.runtime.run_in_stack(lambda: runpy.run_path(args.PROGRAM, run_name='__main__')) File "/usr/local/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/usr/local/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "__main__.py", line 14, in <module> pulumi.runtime.settings._set_test_mode_enabled(True) AttributeError: module 'pulumi.runtime.settings' has no attribute '_set_test_mode_enabled' ``` Rolling back just the Python SDK to 3.37.2 (`pip install pulumi==3.37.2`) even with CLI still at 3.38.0 seems to resolve the issue (previews 
and updates are successful). ### Steps to reproduce 1. Have a python 3.10.6 environment 2. Have latest pulumi client and python SDK (3.38.0) installed 3. Run `pulumi preview` or `pulumi up` on a stack ### Expected Behavior Successful execution of the requested action. ### Actual Behavior Failed execution of the requested action with stacktrace above. ### Output of `pulumi about` ``` # pulumi about CLI Version 3.38.0 Go Version go1.19 Go Compiler gc Plugins NAME VERSION aws 5.13.0 aws-native 0.22.0 command 0.4.1 kubernetes 3.21.0 postgresql 3.6.0 python unknown random 4.8.0 Host OS debian Version 11.4 Arch x86_64 This project is written in python: executable='/usr/local/bin/python3' version='3.10.6' Backend Name [...] URL [...] User [...] Organizations Dependencies: NAME VERSION backoff 2.1.2 black 22.6.0 boto3 1.24.22 coloredlogs 15.0.1 debugpy 1.6.0 Jinja2 3.1.2 kubernetes 23.6.0 mergedeep 1.3.4 packaging 21.3.0 pip 22.2.1 pre-commit 2.20.0 pulumi-aws 5.13.0 pulumi-aws-native 0.22.0 pulumi-command 0.4.1 pulumi-kubernetes 3.21.0 pulumi-postgresql 3.6.0 pulumi-random 4.8.0 pyhumps 3.7.2 wheel 0.37.1 ``` ### Additional context We do not make use of the associated "test mode" or environment variable `PULUMI_TEST_MODE`, which are the only references I could find to the error itself. ### Contributing Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
test
possible python sdk regression in module pulumi runtime settings has no attribute set test mode enabled what happened after upgrade to actions like pulumi up and pulumi preview are failing with the following stacktrace pulumi up previewing update type name plan info pulumi pulumi stack error diagnostics pulumi pulumi stack error program failed with an unhandled exception traceback most recent call last file usr local bin pulumi language python exec line in loop run until complete coro file usr local lib asyncio base events py line in run until complete return future result file usr local lib site packages pulumi runtime stack py line in run in stack await run pulumi func lambda stack func file usr local lib site packages pulumi runtime stack py line in run pulumi func func file usr local lib site packages pulumi runtime stack py line in await run pulumi func lambda stack func file usr local lib site packages pulumi runtime stack py line in init func file usr local bin pulumi language python exec line in coro pulumi runtime run in stack lambda runpy run path args program run name main file usr local lib runpy py line in run path return run module code code init globals run name file usr local lib runpy py line in run module code run code code mod globals init globals file usr local lib runpy py line in run code exec code run globals file main py line in pulumi runtime settings set test mode enabled true attributeerror module pulumi runtime settings has no attribute set test mode enabled rolling back just the python sdk to pip install pulumi even with cli still at seems to resolve the issue previews and updates are successful steps to reproduce have a python environment have latest pulumi client and python sdk installed run pulumi preview or pulumi up on a stack expected behavior successful execution of the requested action actual behavior failed execution of the requested action with stacktrace above output of pulumi about pulumi about cli version go version go compiler gc plugins name version aws aws native command kubernetes postgresql python unknown random host os debian version arch this project is written in python executable usr local bin version backend name url user organizations dependencies name version backoff black coloredlogs debugpy kubernetes mergedeep packaging pip pre commit pulumi aws pulumi aws native pulumi command pulumi kubernetes pulumi postgresql pulumi random pyhumps wheel additional context we do not make use of the associated test mode or environment variable pulumi test mode which are the only references i could find to the error itself contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already
1
43,838
11,308,305,168
IssuesEvent
2020-01-19 04:22:32
garciparedes/ng-katex
https://api.github.com/repos/garciparedes/ng-katex
closed
Improve Module Building
build enhancement waiting wontfix
Due to some problems like #34 it's necessary to configure [`ng-packagr`](https://github.com/dherges/ng-packagr) to build the Angular Module.
1.0
Improve Module Building - Due to some problems like #34 it's necessary to configure [`ng-packagr`](https://github.com/dherges/ng-packagr) to build the Angular Module.
non_test
improve module building due to some problems like it s necessary to configure to build the angular module
0
52,194
6,223,214,859
IssuesEvent
2017-07-10 11:16:11
LDMW/app
https://api.github.com/repos/LDMW/app
closed
Copy doesn't fit into mobile search box
please-test question UI
The new copy from the designs doesn't fit into the search box on mobile: <img src="https://user-images.githubusercontent.com/16775804/27957588-fd6c7e14-6316-11e7-94db-84c1b6aeba0a.png" width=200px /> To fit it must be 12px on iPhone 5, which would not pass our accessibility requirements: <img src="https://user-images.githubusercontent.com/16775804/27957727-9ec9977e-6317-11e7-8db1-62772aa74b2b.png" width=200px />
1.0
Copy doesn't fit into mobile search box - The new copy from the designs doesn't fit into the search box on mobile: <img src="https://user-images.githubusercontent.com/16775804/27957588-fd6c7e14-6316-11e7-94db-84c1b6aeba0a.png" width=200px /> To fit it must be 12px on iPhone 5, which would not pass our accessibility requirements: <img src="https://user-images.githubusercontent.com/16775804/27957727-9ec9977e-6317-11e7-8db1-62772aa74b2b.png" width=200px />
test
copy doesn t fit into mobile search box the new copy from the designs doesn t fit into the search box on mobile to fit it must be on iphone which would not pass our accessibility requirements
1
311,612
26,801,501,500
IssuesEvent
2023-02-01 15:23:46
OudayAhmed/Assignment-1-DECIDE
https://api.github.com/repos/OudayAhmed/Assignment-1-DECIDE
closed
Create two input files for "YES" and "NO" outputs
test
Create a sets of input files to test the program. Set one should contain an input files that generate the outcome "YES". Set two should contain aninput files that generate the outcome "NO".
1.0
Create two input files for "YES" and "NO" outputs - Create a sets of input files to test the program. Set one should contain an input files that generate the outcome "YES". Set two should contain aninput files that generate the outcome "NO".
test
create two input files for yes and no outputs create a sets of input files to test the program set one should contain an input files that generate the outcome yes set two should contain aninput files that generate the outcome no
1
82,717
16,018,661,478
IssuesEvent
2021-04-20 19:25:34
microsoft/electionguard-python
https://api.github.com/repos/microsoft/electionguard-python
closed
Add bit-packed serialization mechanism
code owner only enhancement
currently the `Serializable` interface supports `to_json` and `from_json` as a consuming application i would like to serialize and deserialize using a binary serialization mechanism so that i can conserve space for objects in transit and on disk. both `message pack` and `protobuf` are reasonable options. Additionally, as a consuming application i would like to have strong confidence that the serialization parsers behave consistently so i do not have to worry about malformed data issues. consider using [EverParse](https://www.microsoft.com/en-us/research/publication/everparse/)
1.0
Add bit-packed serialization mechanism - currently the `Serializable` interface supports `to_json` and `from_json` as a consuming application i would like to serialize and deserialize using a binary serialization mechanism so that i can conserve space for objects in transit and on disk. both `message pack` and `protobuf` are reasonable options. Additionally, as a consuming application i would like to have strong confidence that the serialization parsers behave consistently so i do not have to worry about malformed data issues. consider using [EverParse](https://www.microsoft.com/en-us/research/publication/everparse/)
non_test
add bit packed serialization mechanism currently the serializable interface supports to json and from json as a consuming application i would like to serialize and deserialize using a binary serialization mechanism so that i can conserve space for objects in transit and on disk both message pack and protobuf are reasonable options additionally as a consuming application i would like to have strong confidence that the serialization parsers behave consistently so i do not have to worry about malformed data issues consider using
0
199,539
15,047,104,077
IssuesEvent
2021-02-03 08:27:14
IntellectualSites/FastAsyncWorldEdit
https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit
opened
Error when using //regen on Tuinity
Requires Testing
**/fawe debugpaste**: https://athion.net/ISPaster/paste/view/451bb19d003843d29c9d41b55492fd32 **another attempt with FAWE-591**: https://pastebin.com/4Gjvpzan **Required Information** - FAWE Version Number (`/version FastAsyncWorldEdit`): 587-591 - Spigot/Paper Version Number (`/version`): Tuinity-183 - Minecraft Version: 1.16.5 **Describe the bug** //regen doesn't work and spits out an error in console **To Reproduce** Steps to reproduce the behavior: 1. Mark pos1 and pos2 2. do //regen 3. See error in console **Plugins being used on the server** 1L1D, BetterGiants, BKCommonLib, BlockLocker, Core, CoreProtect, CustomCommands, DeadChest, dynmap*, Enchantments_plus, Essentials, EssentialsGeoIP, FastAsyncWorldEdit (WorldEdit), GSit, HolographicDisplays, ItemJoin, ItemSlotMachine, LevelledMobs, LibsDisguises, LightCleaner, LuckPerms, Maplands, My_Worlds, MythicMobs, Negativity, OpenInv, PlaceholderAPI, PlugMan, ProtocolLib, PsudoCommand, QuickShop, SilkSpawners, TAB, TCCoasters, TCPShield*, TimeIsMoney*, Train_Carts, UniqueRewards, Vault, ViaBackwards, ViaVersion, Vivecraft-Spigot-Extensions, WorldEditSelectionVisualizer, WorldGuard **Checklist**: <!--- Make sure you've completed the following steps (put an "X" between of brackets): --> - [X] I included all information required in the sections above - [X] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue) - [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.5](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/) - [X] I made sure the bug/error is not caused by any other plugin
1.0
Error when using //regen on Tuinity - **/fawe debugpaste**: https://athion.net/ISPaster/paste/view/451bb19d003843d29c9d41b55492fd32 **another attempt with FAWE-591**: https://pastebin.com/4Gjvpzan **Required Information** - FAWE Version Number (`/version FastAsyncWorldEdit`): 587-591 - Spigot/Paper Version Number (`/version`): Tuinity-183 - Minecraft Version: 1.16.5 **Describe the bug** //regen doesn't work and spits out an error in console **To Reproduce** Steps to reproduce the behavior: 1. Mark pos1 and pos2 2. do //regen 3. See error in console **Plugins being used on the server** 1L1D, BetterGiants, BKCommonLib, BlockLocker, Core, CoreProtect, CustomCommands, DeadChest, dynmap*, Enchantments_plus, Essentials, EssentialsGeoIP, FastAsyncWorldEdit (WorldEdit), GSit, HolographicDisplays, ItemJoin, ItemSlotMachine, LevelledMobs, LibsDisguises, LightCleaner, LuckPerms, Maplands, My_Worlds, MythicMobs, Negativity, OpenInv, PlaceholderAPI, PlugMan, ProtocolLib, PsudoCommand, QuickShop, SilkSpawners, TAB, TCCoasters, TCPShield*, TimeIsMoney*, Train_Carts, UniqueRewards, Vault, ViaBackwards, ViaVersion, Vivecraft-Spigot-Extensions, WorldEditSelectionVisualizer, WorldGuard **Checklist**: <!--- Make sure you've completed the following steps (put an "X" between of brackets): --> - [X] I included all information required in the sections above - [X] I made sure there are no duplicates of this report [(Use Search)](https://github.com/IntellectualSites/FastAsyncWorldEdit/issues?q=is%3Aissue) - [X] I made sure I am using an up-to-date version of [FastAsyncWorldEdit for 1.16.5](https://ci.athion.net/job/FastAsyncWorldEdit-1.16/) - [X] I made sure the bug/error is not caused by any other plugin
test
error when using regen on tuinity fawe debugpaste another attempt with fawe required information fawe version number version fastasyncworldedit spigot paper version number version tuinity minecraft version describe the bug regen doesn t work and spits out an error in console to reproduce steps to reproduce the behavior mark and do regen see error in console plugins being used on the server bettergiants bkcommonlib blocklocker core coreprotect customcommands deadchest dynmap enchantments plus essentials essentialsgeoip fastasyncworldedit worldedit gsit holographicdisplays itemjoin itemslotmachine levelledmobs libsdisguises lightcleaner luckperms maplands my worlds mythicmobs negativity openinv placeholderapi plugman protocollib psudocommand quickshop silkspawners tab tccoasters tcpshield timeismoney train carts uniquerewards vault viabackwards viaversion vivecraft spigot extensions worldeditselectionvisualizer worldguard checklist i included all information required in the sections above i made sure there are no duplicates of this report i made sure i am using an up to date version of i made sure the bug error is not caused by any other plugin
1