Column summary for the dataset (stringlengths rows give min and max string length; stringclasses rows give the number of distinct values):

| Column | Dtype | Range / classes |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 4 to 112 |
| repo_url | stringlengths | 33 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 1.02k |
| labels | stringlengths | 4 to 1.54k |
| body | stringlengths | 1 to 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 to 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 252k |
| binary_label | int64 | 0 to 1 |
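The schema above maps directly onto a pandas DataFrame. A minimal sketch, using two in-memory rows copied from the records below (the actual source file is not named in this preview, so no loading code is shown; the apparent rule that `binary_label` encodes `label` as test = 1, non_test = 0 is an inference from the sample rows, not documented behavior):

```python
import pandas as pd

# Two sample rows copied from the records below (most columns omitted
# for brevity); the full dataset has 15 columns per the table above.
rows = [
    {"type": "IssuesEvent", "repo": "darkf/darkfo", "action": "closed",
     "label": "non_test", "binary_label": 0},
    {"type": "IssuesEvent", "repo": "ppekrol/ravenqa", "action": "closed",
     "label": "test", "binary_label": 1},
]
df = pd.DataFrame(rows)

# Inferred relationship: binary_label is the integer encoding of label.
encoded = (df["label"] == "test").astype(int)
assert (encoded == df["binary_label"]).all()
```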
---
Unnamed: 0: 198,657
id: 6,975,358,027
type: IssuesEvent
created_at: 2017-12-12 06:36:04
repo: xcat2/xcat-core
repo_url: https://api.github.com/repos/xcat2/xcat-core
action: closed
title: Hierarchical mode support with nodenames in synclist
labels: priority:high
body:
Hi! It looks like having nodenames in synclist doesn't work in hiearchical mode. I have a syncfile that contains: ``` /tmp/a -> (sh-101-60) /tmp/a /tmp/b -> (sh-6-34) /tmp/b ``` `sh-101-60` is directly managed by the management node, while `sh-6-34` is managed by a service node: ``` # lsdef -c sh-101-60,sh-6-34 -i servicenode sh-101-60: servicenode= sh-6-34: servicenode=sh-hn03 ``` Distributing files on `sh-101-60` (direct mode) works normally: ``` # updatenode sh-101-60 -F File synchronization has completed for nodes. # md5sum /tmp/a && ssh sh-101-60 md5sum /tmp/a c563744e1d6d090285937d352e3128db /tmp/a c563744e1d6d090285937d352e3128db /tmp/a ``` but it fails on `sh-6-34` (hierarchical mode): ``` # updatenode sh-6-34 -f File synchronization has completed for service nodes. # updatenode sh-6-34 -F sh-6-34: Permission denied (publickey,hostbased). sh-6-34: /tmp/rsync_sh-6-34: line 3: syntax error near unexpected token `(' sh-6-34: /tmp/rsync_sh-6-34: line 3: `/usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/b sh-hn01.SUNet@sh-6-34:(sh-6-34)/tmp' File synchronization has completed for nodes. ``` The problem is that the `rsync` script generated for the hierarchical mode fails to filter synclist entries based on hostnames and includes them all, with an erroneous syntax: ``` # ssh sh-hn03 cat /tmp/rsync_sh-6-34 #!/bin/sh /usr/bin/ssh sh-hn01.SUNet@sh-6-34 '/bin/mkdir -p (sh-6-34)/tmp (sh-101-60)/tmp' /usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/b sh-hn01.SUNet@sh-6-34:(sh-6-34)/tmp /usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/a sh-hn01.SUNet@sh-6-34:(sh-101-60)/tmp ``` Note how the generated `rsync` command still includes the destination hostname inside parenthesis as the destination path for the file. Could you please provide a fix for this? This is currently making the hierarchical mode pretty unusable for us. Thanks!
index: 1.0
text_combine:
Hierarchical mode support with nodenames in synclist - Hi! It looks like having nodenames in synclist doesn't work in hiearchical mode. I have a syncfile that contains: ``` /tmp/a -> (sh-101-60) /tmp/a /tmp/b -> (sh-6-34) /tmp/b ``` `sh-101-60` is directly managed by the management node, while `sh-6-34` is managed by a service node: ``` # lsdef -c sh-101-60,sh-6-34 -i servicenode sh-101-60: servicenode= sh-6-34: servicenode=sh-hn03 ``` Distributing files on `sh-101-60` (direct mode) works normally: ``` # updatenode sh-101-60 -F File synchronization has completed for nodes. # md5sum /tmp/a && ssh sh-101-60 md5sum /tmp/a c563744e1d6d090285937d352e3128db /tmp/a c563744e1d6d090285937d352e3128db /tmp/a ``` but it fails on `sh-6-34` (hierarchical mode): ``` # updatenode sh-6-34 -f File synchronization has completed for service nodes. # updatenode sh-6-34 -F sh-6-34: Permission denied (publickey,hostbased). sh-6-34: /tmp/rsync_sh-6-34: line 3: syntax error near unexpected token `(' sh-6-34: /tmp/rsync_sh-6-34: line 3: `/usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/b sh-hn01.SUNet@sh-6-34:(sh-6-34)/tmp' File synchronization has completed for nodes. ``` The problem is that the `rsync` script generated for the hierarchical mode fails to filter synclist entries based on hostnames and includes them all, with an erroneous syntax: ``` # ssh sh-hn03 cat /tmp/rsync_sh-6-34 #!/bin/sh /usr/bin/ssh sh-hn01.SUNet@sh-6-34 '/bin/mkdir -p (sh-6-34)/tmp (sh-101-60)/tmp' /usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/b sh-hn01.SUNet@sh-6-34:(sh-6-34)/tmp /usr/bin/rsync --rsync-path /usr/bin/rsync -Lprogtz /var/xcat/syncfiles/tmp/a sh-hn01.SUNet@sh-6-34:(sh-101-60)/tmp ``` Note how the generated `rsync` command still includes the destination hostname inside parenthesis as the destination path for the file. Could you please provide a fix for this? This is currently making the hierarchical mode pretty unusable for us. Thanks!
label: non_test
text:
hierarchical mode support with nodenames in synclist hi it looks like having nodenames in synclist doesn t work in hiearchical mode i have a syncfile that contains tmp a sh tmp a tmp b sh tmp b sh is directly managed by the management node while sh is managed by a service node lsdef c sh sh i servicenode sh servicenode sh servicenode sh distributing files on sh direct mode works normally updatenode sh f file synchronization has completed for nodes tmp a ssh sh tmp a tmp a tmp a but it fails on sh hierarchical mode updatenode sh f file synchronization has completed for service nodes updatenode sh f sh permission denied publickey hostbased sh tmp rsync sh line syntax error near unexpected token sh tmp rsync sh line usr bin rsync rsync path usr bin rsync lprogtz var xcat syncfiles tmp b sh sunet sh sh tmp file synchronization has completed for nodes the problem is that the rsync script generated for the hierarchical mode fails to filter synclist entries based on hostnames and includes them all with an erroneous syntax ssh sh cat tmp rsync sh bin sh usr bin ssh sh sunet sh bin mkdir p sh tmp sh tmp usr bin rsync rsync path usr bin rsync lprogtz var xcat syncfiles tmp b sh sunet sh sh tmp usr bin rsync rsync path usr bin rsync lprogtz var xcat syncfiles tmp a sh sunet sh sh tmp note how the generated rsync command still includes the destination hostname inside parenthesis as the destination path for the file could you please provide a fix for this this is currently making the hierarchical mode pretty unusable for us thanks
binary_label: 0
---
Unnamed: 0: 121,534
id: 25,983,827,075
type: IssuesEvent
created_at: 2022-12-19 21:31:51
repo: pyiron/pyiron_atomistics
repo_url: https://api.github.com/repos/pyiron/pyiron_atomistics
action: opened
title: TableJob should update its functionality in a more explicit way
labels: code_smell
body:
`TableJob` has all sorts of built-in functions, and these depend one what pyiron modules have been loaded. However, right now it's super implicit and difficult to see when and where these get added. Right now: - New methods are defined [here](https://github.com/pyiron/pyiron_atomistics/blob/main/pyiron_atomistics/table/funct.py) - These are imported and appended to `pyiron_base.JobTable` [here](https://github.com/pyiron/pyiron_atomistics/blob/main/pyiron_atomistics/table/datamining.py) - Then [in `__init__`](https://github.com/pyiron/pyiron_atomistics/blob/b096732c16dd19d4522ca57ea1ac10979736b765/pyiron_atomistics/__init__.py#L68) `JobTable` is re-imported from the above location, but only as a string in the `JOB_CLASS_DICT` What I want - [x] New methods are defined as a list -- how it is is already fine - [ ] `JobTable` is imported right in `__init__` - [ ] Also in `__init__` we call a *public* method to add our new methods to the `JobTable` - [ ] Bonus: Such a method can add a safety check here so the methods list doesn't just keep getting longer on every import / warns the user if they try to add a method with the same name as an existing method
index: 1.0
text_combine:
TableJob should update its functionality in a more explicit way - `TableJob` has all sorts of built-in functions, and these depend one what pyiron modules have been loaded. However, right now it's super implicit and difficult to see when and where these get added. Right now: - New methods are defined [here](https://github.com/pyiron/pyiron_atomistics/blob/main/pyiron_atomistics/table/funct.py) - These are imported and appended to `pyiron_base.JobTable` [here](https://github.com/pyiron/pyiron_atomistics/blob/main/pyiron_atomistics/table/datamining.py) - Then [in `__init__`](https://github.com/pyiron/pyiron_atomistics/blob/b096732c16dd19d4522ca57ea1ac10979736b765/pyiron_atomistics/__init__.py#L68) `JobTable` is re-imported from the above location, but only as a string in the `JOB_CLASS_DICT` What I want - [x] New methods are defined as a list -- how it is is already fine - [ ] `JobTable` is imported right in `__init__` - [ ] Also in `__init__` we call a *public* method to add our new methods to the `JobTable` - [ ] Bonus: Such a method can add a safety check here so the methods list doesn't just keep getting longer on every import / warns the user if they try to add a method with the same name as an existing method
label: non_test
text:
tablejob should update its functionality in a more explicit way tablejob has all sorts of built in functions and these depend one what pyiron modules have been loaded however right now it s super implicit and difficult to see when and where these get added right now new methods are defined these are imported and appended to pyiron base jobtable then jobtable is re imported from the above location but only as a string in the job class dict what i want new methods are defined as a list how it is is already fine jobtable is imported right in init also in init we call a public method to add our new methods to the jobtable bonus such a method can add a safety check here so the methods list doesn t just keep getting longer on every import warns the user if they try to add a method with the same name as an existing method
binary_label: 0
---
Unnamed: 0: 204,616
id: 15,504,567,320
type: IssuesEvent
created_at: 2021-03-11 14:26:24
repo: xdefilab/xhalflife-base
repo_url: https://api.github.com/repos/xdefilab/xhalflife-base
action: closed
title: pool中显示的weth数量和stat中显示的数量不一致 (EN: the WETH amount shown in the pool does not match the amount shown in stat)
labels: API High bug kovan_test
body:
<img width="1224" alt="图片" src="https://user-images.githubusercontent.com/23711319/110191330-b4ca8c80-7e62-11eb-8170-3801e02ba849.png"> <img width="1306" alt="图片" src="https://user-images.githubusercontent.com/23711319/110191333-bdbb5e00-7e62-11eb-97db-a1fcebf39549.png">
index: 1.0
text_combine:
pool中显示的weth数量和stat中显示的数量不一致 - <img width="1224" alt="图片" src="https://user-images.githubusercontent.com/23711319/110191330-b4ca8c80-7e62-11eb-8170-3801e02ba849.png"> <img width="1306" alt="图片" src="https://user-images.githubusercontent.com/23711319/110191333-bdbb5e00-7e62-11eb-97db-a1fcebf39549.png">
label: test
text:
pool中显示的weth数量和stat中显示的数量不一致 img width alt 图片 src img width alt 图片 src
binary_label: 1
---
Unnamed: 0: 4,496
id: 2,610,095,099
type: IssuesEvent
created_at: 2015-02-26 18:28:41
repo: chrsmith/dsdsdaadf
repo_url: https://api.github.com/repos/chrsmith/dsdsdaadf
action: opened
title: 深圳红蓝光怎样治青春痘 (EN: how red/blue light therapy in Shenzhen treats acne)
labels: auto-migrated Priority-Medium Type-Defect
body:
``` 深圳红蓝光怎样治青春痘【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:21
index: 1.0
text_combine:
深圳红蓝光怎样治青春痘 - ``` 深圳红蓝光怎样治青春痘【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:21
label: non_test
text:
深圳红蓝光怎样治青春痘 深圳红蓝光怎样治青春痘【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at
binary_label: 0
---
Unnamed: 0: 26,467
id: 5,255,653,429
type: IssuesEvent
created_at: 2017-02-02 16:04:23
repo: darkf/darkfo
repo_url: https://api.github.com/repos/darkf/darkfo
action: closed
title: Recommend pillow instead of PIL in the README
labels: documentation
body:
Pillow is a newer, more actively maintained fork of PIL and is Python 3 compatible
index: 1.0
text_combine:
Recommend pillow instead of PIL in the README - Pillow is a newer, more actively maintained fork of PIL and is Python 3 compatible
label: non_test
text:
recommend pillow instead of pil in the readme pillow is a newer more actively maintained fork of pil and is python compatible
binary_label: 0
---
Unnamed: 0: 329,687
id: 28,301,702,736
type: IssuesEvent
created_at: 2023-04-10 06:52:52
repo: dotnet/machinelearning-modelbuilder
repo_url: https://api.github.com/repos/dotnet/machinelearning-modelbuilder
action: closed
title: Value prediction: After changing the feature column 's Data type from "Single" to "String", the training should be completed successfully for Data source from SQLPassword.
labels: Priority:2 Reported by: Test
body:
**System Information (please complete the following information):** Windows OS: Windows-11-Enterprise-21H2 ML.Net Model Builder 2019&2022: 16.13.5.2216801 (Main Build) Microsoft Visual Studio Enterprise: 2019(16.11.11); 2022(17.1.1) .Net: 5.0 & 6.0 **Describe the bug** - On which step of the process did you run into an issue: Training Value prediction with Data source from SQLPassword. **TestMatrix** https://testpass.blob.core.windows.net/test-pass-data/taxi-fare.csv **To Reproduce** Steps to reproduce the behavior: 1. Pre-requisite: Do [Pre-requisites/SQLPassword](https://github.com/dotnet/machinelearning-tools/blob/main/docs/tests/Instructions/Pre-Requisites/SQLPassword.md) before repro this issue. 2. Select Create a new project from the Visual Studio start window; 4. Choose the C# Console App (.NET Core) project template; 5. Add model builder by right click on the project; 6. Select Value prediction scenario>Local (CPU); 7. On Data page, select>SQL Server>Microsoft SQL Server>SQL Server Authentication to input the data source; 8. Choose "fare_amount" as label and then click "Avanced data option". 9. Change **"rate_code" or "passenger_count" or "trip_time_in_secs" or "trip_distance"** column 's Data type from "Single" to "String" and save it. 10. Click Start training button, prompt model builder error as below screenshot. **Expected behavior** The training should be completed successfully after changing the feature column 's Data type from "Single" to "String". **Screenshots** Data settings: ![image](https://user-images.githubusercontent.com/99375895/159425678-9df70994-008c-4cca-ba9c-415f8222e912.png) Error screenshot: ![image](https://user-images.githubusercontent.com/99375895/159425833-7ea7a707-1aa4-4232-a36f-3c038b9b41b5.png) **Additional context** When do the same thing for the .csv file data source, the training can be completed successfully. 
![image](https://user-images.githubusercontent.com/99375895/159426565-59710623-35b3-4f95-88b2-a78c0dcf2d19.png) ![image](https://user-images.githubusercontent.com/99375895/159426703-be551344-7a0e-43bf-9a92-fb41da9c7f15.png)
index: 1.0
text_combine:
Value prediction: After changing the feature column 's Data type from "Single" to "String", the training should be completed successfully for Data source from SQLPassword. - **System Information (please complete the following information):** Windows OS: Windows-11-Enterprise-21H2 ML.Net Model Builder 2019&2022: 16.13.5.2216801 (Main Build) Microsoft Visual Studio Enterprise: 2019(16.11.11); 2022(17.1.1) .Net: 5.0 & 6.0 **Describe the bug** - On which step of the process did you run into an issue: Training Value prediction with Data source from SQLPassword. **TestMatrix** https://testpass.blob.core.windows.net/test-pass-data/taxi-fare.csv **To Reproduce** Steps to reproduce the behavior: 1. Pre-requisite: Do [Pre-requisites/SQLPassword](https://github.com/dotnet/machinelearning-tools/blob/main/docs/tests/Instructions/Pre-Requisites/SQLPassword.md) before repro this issue. 2. Select Create a new project from the Visual Studio start window; 4. Choose the C# Console App (.NET Core) project template; 5. Add model builder by right click on the project; 6. Select Value prediction scenario>Local (CPU); 7. On Data page, select>SQL Server>Microsoft SQL Server>SQL Server Authentication to input the data source; 8. Choose "fare_amount" as label and then click "Avanced data option". 9. Change **"rate_code" or "passenger_count" or "trip_time_in_secs" or "trip_distance"** column 's Data type from "Single" to "String" and save it. 10. Click Start training button, prompt model builder error as below screenshot. **Expected behavior** The training should be completed successfully after changing the feature column 's Data type from "Single" to "String". 
**Screenshots** Data settings: ![image](https://user-images.githubusercontent.com/99375895/159425678-9df70994-008c-4cca-ba9c-415f8222e912.png) Error screenshot: ![image](https://user-images.githubusercontent.com/99375895/159425833-7ea7a707-1aa4-4232-a36f-3c038b9b41b5.png) **Additional context** When do the same thing for the .csv file data source, the training can be completed successfully. ![image](https://user-images.githubusercontent.com/99375895/159426565-59710623-35b3-4f95-88b2-a78c0dcf2d19.png) ![image](https://user-images.githubusercontent.com/99375895/159426703-be551344-7a0e-43bf-9a92-fb41da9c7f15.png)
label: test
text:
value prediction after changing the feature column s data type from single to string the training should be completed successfully for data source from sqlpassword system information please complete the following information windows os windows enterprise ml net model builder main build microsoft visual studio enterprise net describe the bug on which step of the process did you run into an issue training value prediction with data source from sqlpassword testmatrix to reproduce steps to reproduce the behavior pre requisite do before repro this issue select create a new project from the visual studio start window choose the c console app net core project template add model builder by right click on the project select value prediction scenario local cpu on data page select sql server microsoft sql server sql server authentication to input the data source choose fare amount as label and then click avanced data option change rate code or passenger count or trip time in secs or trip distance column s data type from single to string and save it click start training button prompt model builder error as below screenshot expected behavior the training should be completed successfully after changing the feature column s data type from single to string screenshots data settings error screenshot additional context when do the same thing for the csv file data source the training can be completed successfully
binary_label: 1
---
Unnamed: 0: 372,018
id: 25,978,678,801
type: IssuesEvent
created_at: 2022-12-19 16:50:59
repo: oVirt/terraform-provider-ovirt
repo_url: https://api.github.com/repos/oVirt/terraform-provider-ovirt
action: opened
title: Configure NIC con VM
labels: documentation
body:
I am trying to use the last version 2.1.5 migrating from the old resource format. Currently I'm having problems on understanding how to pass network configuration to the internal configured NIC. Previously this was done using the structure _nic_configuration_, now gone. I thought I could use cloud-init with the internal_custom_script, like this: ` "network": "ethernets": "eth0": "addresses": - "192.168.0.110" "gateway4": "192.168.0.1" "name": "eth0" "nameservers": "addresses": - "8.8.8.8" "version": 2 "runcmd": - "echo 'exampleexample' | passwd --stdin root" "ssh_authorized_keys": - "ssh-rsa XXXX" "timezone": "CEST" ` But the "network" section is being ignored. Please notice that copying the above configuration in /etc/cloud/cloud.cfg.d in the VM correctly configures the network interface. Could a workaround be to copy the above file using runcmd to the above directory? Thanks
index: 1.0
text_combine:
Configure NIC con VM - I am trying to use the last version 2.1.5 migrating from the old resource format. Currently I'm having problems on understanding how to pass network configuration to the internal configured NIC. Previously this was done using the structure _nic_configuration_, now gone. I thought I could use cloud-init with the internal_custom_script, like this: ` "network": "ethernets": "eth0": "addresses": - "192.168.0.110" "gateway4": "192.168.0.1" "name": "eth0" "nameservers": "addresses": - "8.8.8.8" "version": 2 "runcmd": - "echo 'exampleexample' | passwd --stdin root" "ssh_authorized_keys": - "ssh-rsa XXXX" "timezone": "CEST" ` But the "network" section is being ignored. Please notice that copying the above configuration in /etc/cloud/cloud.cfg.d in the VM correctly configures the network interface. Could a workaround be to copy the above file using runcmd to the above directory? Thanks
label: non_test
text:
configure nic con vm i am trying to use the last version migrating from the old resource format currently i m having problems on understanding how to pass network configuration to the internal configured nic previously this was done using the structure nic configuration now gone i thought i could use cloud init with the internal custom script like this network ethernets addresses name nameservers addresses version runcmd echo exampleexample passwd stdin root ssh authorized keys ssh rsa xxxx timezone cest but the network section is being ignored please notice that copying the above configuration in etc cloud cloud cfg d in the vm correctly configures the network interface could a workaround be to copy the above file using runcmd to the above directory thanks
binary_label: 0
---
Unnamed: 0: 14,674
id: 3,284,414,464
type: IssuesEvent
created_at: 2015-10-28 16:32:11
repo: jgirald/ES2015F
repo_url: https://api.github.com/repos/jgirald/ES2015F
action: closed
title: Animació caminar civil babilònic (EN: walking animation for the Babylonian civilian)
labels: Design TeamA
body:
### Description Crear la animación de caminar para el modelo de civil **babilónico** creado en el issue #139 . ### Acceptance Criteria La animación tiene su propia definición dentro de blender para ser diferenciada de las demás.
index: 1.0
text_combine:
Animació caminar civil babilònic - ### Description Crear la animación de caminar para el modelo de civil **babilónico** creado en el issue #139 . ### Acceptance Criteria La animación tiene su propia definición dentro de blender para ser diferenciada de las demás.
label: non_test
text:
animació caminar civil babilònic description crear la animación de caminar para el modelo de civil babilónico creado en el issue acceptance criteria la animación tiene su propia definición dentro de blender para ser diferenciada de las demás
binary_label: 0
---
Unnamed: 0: 571,310
id: 17,023,281,508
type: IssuesEvent
created_at: 2021-07-03 01:12:41
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: Add support for concatenated GZIP files
labels: Component: osmosis Priority: major Resolution: fixed Type: enhancement
body:
**[Submitted to the original trac issue database at 12.45pm, Friday, 8th August 2008]** Add workaround to allow contatenated GZIP files as per the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4691425
index: 1.0
text_combine:
Add support for concatenated GZIP files - **[Submitted to the original trac issue database at 12.45pm, Friday, 8th August 2008]** Add workaround to allow contatenated GZIP files as per the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4691425
label: non_test
text:
add support for concatenated gzip files add workaround to allow contatenated gzip files as per the workaround in
binary_label: 0
---
Unnamed: 0: 220,144
id: 17,150,620,230
type: IssuesEvent
created_at: 2021-07-13 20:04:05
repo: godotengine/godot
repo_url: https://api.github.com/repos/godotengine/godot
action: opened
title: Different printed results of running project
labels: bug needs testing topic:core
body:
### Godot version 4.0.dev.custom_build. 7c432c923 ### System information Ubuntu 21.04 - Nvidia GTX 970, Gnome shell 3.38 X11 ### Issue description This code ``` extends Node2D func _ready() -> void: for i in range(200): var child: Node = ClassDB.instantiate("AnimatedSprite2D") child.set_name("Special Node " + str(i)) add_child(child) func _process(delta: float) -> void: var choosen_node = find_node("Special Node " + str(randi() % 200), true, false) print(choosen_node) ``` Should always print info about node which exists(tested on 3.x branch and always it worked fine) When I tried to run this code with normal build of Godot, then I got this results(this is proper output) ``` [AnimatedSprite2D:30786192644] [AnimatedSprite2D:30719083776] [AnimatedSprite2D:29578233020] [AnimatedSprite2D:29611787454] ``` but when I run project with Godot compiled with sanitizers support, then I got this output ``` [Object:null] [Object:null] [Object:null] [Object:null] ``` Both compilation commands are equal(`godot --version` prints exactly the same hash for both apps), except of course using use_ubsan=yes and use_asan=yes, so I suspect that this may be problem with all applications which starts/works slower. ### Steps to reproduce Just run test project ### Minimal reproduction project [rr.zip](https://github.com/godotengine/godot/files/6811645/rr.zip)
index: 1.0
text_combine:
Different printed results of running project - ### Godot version 4.0.dev.custom_build. 7c432c923 ### System information Ubuntu 21.04 - Nvidia GTX 970, Gnome shell 3.38 X11 ### Issue description This code ``` extends Node2D func _ready() -> void: for i in range(200): var child: Node = ClassDB.instantiate("AnimatedSprite2D") child.set_name("Special Node " + str(i)) add_child(child) func _process(delta: float) -> void: var choosen_node = find_node("Special Node " + str(randi() % 200), true, false) print(choosen_node) ``` Should always print info about node which exists(tested on 3.x branch and always it worked fine) When I tried to run this code with normal build of Godot, then I got this results(this is proper output) ``` [AnimatedSprite2D:30786192644] [AnimatedSprite2D:30719083776] [AnimatedSprite2D:29578233020] [AnimatedSprite2D:29611787454] ``` but when I run project with Godot compiled with sanitizers support, then I got this output ``` [Object:null] [Object:null] [Object:null] [Object:null] ``` Both compilation commands are equal(`godot --version` prints exactly the same hash for both apps), except of course using use_ubsan=yes and use_asan=yes, so I suspect that this may be problem with all applications which starts/works slower. ### Steps to reproduce Just run test project ### Minimal reproduction project [rr.zip](https://github.com/godotengine/godot/files/6811645/rr.zip)
label: test
text:
different printed results of running project godot version dev custom build system information ubuntu nvidia gtx gnome shell issue description this code extends func ready void for i in range var child node classdb instantiate child set name special node str i add child child func process delta float void var choosen node find node special node str randi true false print choosen node should always print info about node which exists tested on x branch and always it worked fine when i tried to run this code with normal build of godot then i got this results this is proper output but when i run project with godot compiled with sanitizers support then i got this output both compilation commands are equal godot version prints exactly the same hash for both apps except of course using use ubsan yes and use asan yes so i suspect that this may be problem with all applications which starts works slower steps to reproduce just run test project minimal reproduction project
binary_label: 1
---
Unnamed: 0: 302,242
id: 26,134,527,685
type: IssuesEvent
created_at: 2022-12-29 10:19:20
repo: TencentBlueKing/bkui-vue3
repo_url: https://api.github.com/repos/TencentBlueKing/bkui-vue3
action: closed
title: feature(all): 组件和icon引入优化 (EN: optimize how components and icons are imported)
labels: enhancement stag/test
body:
#### 组件 之前引入 ``` import BkMessage from '@bkui-vue/message'; ``` 优化后 ``` import { Message } from 'bkui-vue'; ``` #### icon 之前引入 ``` import { AngleDown, Plus, Search } from '@bkui-vue/icon'; ``` 优化后 ``` import { AngleDown, Plus, Search } from 'bkui-vue/lib/icon'; ```
index: 1.0
text_combine:
feature(all): 组件和icon引入优化 - #### 组件 之前引入 ``` import BkMessage from '@bkui-vue/message'; ``` 优化后 ``` import { Message } from 'bkui-vue'; ``` #### icon 之前引入 ``` import { AngleDown, Plus, Search } from '@bkui-vue/icon'; ``` 优化后 ``` import { AngleDown, Plus, Search } from 'bkui-vue/lib/icon'; ```
label: test
text:
feature all 组件和icon引入优化 组件 之前引入 import bkmessage from bkui vue message 优化后 import message from bkui vue icon 之前引入 import angledown plus search from bkui vue icon 优化后 import angledown plus search from bkui vue lib icon
binary_label: 1
---
Unnamed: 0: 9,916
id: 3,078,590,289
type: IssuesEvent
created_at: 2015-08-21 11:20:40
repo: ppekrol/ravenqa
repo_url: https://api.github.com/repos/ppekrol/ravenqa
action: closed
title: Can connect to Traffic Watch.
labels: test
body:
1. Configure connection (Admin). 2. Wait for some logs. 3. Disconnect. 4. Reconnect. 5. Export logs and validate file.
index: 1.0
text_combine:
Can connect to Traffic Watch. - 1. Configure connection (Admin). 2. Wait for some logs. 3. Disconnect. 4. Reconnect. 5. Export logs and validate file.
label: test
text:
can connect to traffic watch configure connection admin wait for some logs disconnect reconnect export logs and validate file
binary_label: 1
---
Unnamed: 0: 273,039
id: 23,723,074,608
type: IssuesEvent
created_at: 2022-08-30 16:58:20
repo: 18F/fedramp-automation
repo_url: https://api.github.com/repos/18F/fedramp-automation
action: closed
title: Clearly distinguish message types
labels: story ui ux g: fedramp integration testing f: session 3 GSA
body:
**Extended Description** As a user, I would like clarity on what different message types in the UI mean. **Acceptance Criteria** - [ ] A visual symbol to clarify positive messaging - [ ] A visual symbol to clarify diagnostic messaging on what needs to be fixed - [ ] Helper text that indicates if an individual message is positive or diagnostic when the user selects or hover over it - [ ] Reconsider usage of severity background colors **Story Tasks** - [ ] Tasks... **Definition of Done** - WITH UI - [ ] Acceptance criteria met - [ ] Unit test coverage of our code > 90% - needs automation story - [ ] Accessibility tests pass - needs automation story - [ ] Automated code quality checks pass - [ ] Security reviewed and reported - check in with Wes on what we could do here - [ ] Reviewed against plain language guidelines - [ ] Design QA passed - [ ] Code must be self-documenting - [ ] No local tech debt - [ ] Documentation updated - [ ] Architectural Decision Record completed as necessary for significant design choices - [ ] PR reviewed & approved - [ ] Source code merged
index: 1.0
text_combine:
Clearly distinguish message types - **Extended Description** As a user, I would like clarity on what different message types in the UI mean. **Acceptance Criteria** - [ ] A visual symbol to clarify positive messaging - [ ] A visual symbol to clarify diagnostic messaging on what needs to be fixed - [ ] Helper text that indicates if an individual message is positive or diagnostic when the user selects or hover over it - [ ] Reconsider usage of severity background colors **Story Tasks** - [ ] Tasks... **Definition of Done** - WITH UI - [ ] Acceptance criteria met - [ ] Unit test coverage of our code > 90% - needs automation story - [ ] Accessibility tests pass - needs automation story - [ ] Automated code quality checks pass - [ ] Security reviewed and reported - check in with Wes on what we could do here - [ ] Reviewed against plain language guidelines - [ ] Design QA passed - [ ] Code must be self-documenting - [ ] No local tech debt - [ ] Documentation updated - [ ] Architectural Decision Record completed as necessary for significant design choices - [ ] PR reviewed & approved - [ ] Source code merged
label: test
text:
clearly distinguish message types extended description as a user i would like clarity on what different message types in the ui mean acceptance criteria a visual symbol to clarify positive messaging a visual symbol to clarify diagnostic messaging on what needs to be fixed helper text that indicates if an individual message is positive or diagnostic when the user selects or hover over it reconsider usage of severity background colors story tasks tasks definition of done with ui acceptance criteria met unit test coverage of our code needs automation story accessibility tests pass needs automation story automated code quality checks pass security reviewed and reported check in with wes on what we could do here reviewed against plain language guidelines design qa passed code must be self documenting no local tech debt documentation updated architectural decision record completed as necessary for significant design choices pr reviewed approved source code merged
binary_label: 1
---
Unnamed: 0: 94,977
id: 11,943,899,824
type: IssuesEvent
created_at: 2020-04-03 00:42:49
repo: crossplane/crossplane
repo_url: https://api.github.com/repos/crossplane/crossplane
action: closed
title: How should Crossplane support "resource dependencies"?
labels: design proposal question
body:
Crossplane currently manages a small set of cloud resources (RDS, CloudSQL, CloudMemorystore, etc). Many of these resources are dependent on other resources that we do not support today, for example many AWS resources require the creator to specify a subnet or security group in which they will be created. Crossplane does not model these resources today; the operator must create subnet groups, etc out of band. I feel that we don't have good alignment around how we're going to "solve" this in Crossplane. I've heard two different approaches discussed when this topic comes up: 1. The 'magic' approach. If a customer asks Crossplane to deploy an `RDSInstance`, an `EKSCluster`, and a `ReplicationGroup` Crossplane automatically creates and/or updates the appropriate VPCs, subnets, security groups, firewall rules, IAM policies, etc in order to ensure workloads running in the `EKSCluster` can consume the `RDSInstance` and `ReplicationGroup`. 2. The 'infrastructure as code' approach. In order for the above to happen the cluster operator and/or developer must explicitly create correctly configured `VPC`, `Subnet`, `IAM`, etc managed resources via Crossplane, then model their relationship to each other. i.e. Configure their `RDSInstance` to reference the correct `Subnet`, etc. Related issues: #351 #324
1.0
How should Crossplane support "resource dependencies"? - Crossplane currently manages a small set of cloud resources (RDS, CloudSQL, CloudMemorystore, etc). Many of these resources are dependent on other resources that we do not support today, for example many AWS resources require the creator to specify a subnet or security group in which they will be created. Crossplane does not model these resources today; the operator must create subnet groups, etc out of band. I feel that we don't have good alignment around how we're going to "solve" this in Crossplane. I've heard two different approaches discussed when this topic comes up: 1. The 'magic' approach. If a customer asks Crossplane to deploy an `RDSInstance`, an `EKSCluster`, and a `ReplicationGroup` Crossplane automatically creates and/or updates the appropriate VPCs, subnets, security groups, firewall rules, IAM policies, etc in order to ensure workloads running in the `EKSCluster` can consume the `RDSInstance` and `ReplicationGroup`. 2. The 'infrastructure as code' approach. In order for the above to happen the cluster operator and/or developer must explicitly create correctly configured `VPC`, `Subnet`, `IAM`, etc managed resources via Crossplane, then model their relationship to each other. i.e. Configure their `RDSInstance` to reference the correct `Subnet`, etc. Related issues: #351 #324
non_test
how should crossplane support resource dependencies crossplane currently manages a small set of cloud resources rds cloudsql cloudmemorystore etc many of these resources are dependent on other resources that we do not support today for example many aws resources require the creator to specify a subnet or security group in which they will be created crossplane does not model these resources today the operator must create subnet groups etc out of band i feel that we don t have good alignment around how we re going to solve this in crossplane i ve heard two different approaches discussed when this topic comes up the magic approach if a customer asks crossplane to deploy an rdsinstance an ekscluster and a replicationgroup crossplane automatically creates and or updates the appropriate vpcs subnets security groups firewall rules iam policies etc in order to ensure workloads running in the ekscluster can consume the rdsinstance and replicationgroup the infrastructure as code approach in order for the above to happen the cluster operator and or developer must explicitly create correctly configured vpc subnet iam etc managed resources via crossplane then model their relationship to each other i e configure their rdsinstance to reference the correct subnet etc related issues
0
59,569
14,422,009,537
IssuesEvent
2020-12-05 01:03:30
mpulsemobile/doccano
https://api.github.com/repos/mpulsemobile/doccano
opened
WS-2019-0427 (Medium) detected in elliptic-6.4.0.tgz
security vulnerability
## WS-2019-0427 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p> <p>Path to dependency file: doccano/app/server/static/package.json</p> <p>Path to vulnerable library: doccano/app/server/static/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - webpack-4.12.0.tgz (Root Library) - node-libs-browser-2.1.0.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.0.4.tgz - :x: **elliptic-6.4.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2 <p>Publish Date: 2019-11-22 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0427</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a">https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a</a></p> <p>Release Date: 2020-05-24</p> <p>Fix Resolution: v6.5.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.4.0","isTransitiveDependency":true,"dependencyTree":"webpack:4.12.0;node-libs-browser:2.1.0;crypto-browserify:3.12.0;browserify-sign:4.0.4;elliptic:6.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.2"}],"vulnerabilityIdentifier":"WS-2019-0427","vulnerabilityDetails":"The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2","vulnerabilityUrl":"https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
True
WS-2019-0427 (Medium) detected in elliptic-6.4.0.tgz - ## WS-2019-0427 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p> <p>Path to dependency file: doccano/app/server/static/package.json</p> <p>Path to vulnerable library: doccano/app/server/static/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - webpack-4.12.0.tgz (Root Library) - node-libs-browser-2.1.0.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.0.4.tgz - :x: **elliptic-6.4.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2 <p>Publish Date: 2019-11-22 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0427</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a">https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a</a></p> <p>Release Date: 2020-05-24</p> <p>Fix Resolution: v6.5.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.4.0","isTransitiveDependency":true,"dependencyTree":"webpack:4.12.0;node-libs-browser:2.1.0;crypto-browserify:3.12.0;browserify-sign:4.0.4;elliptic:6.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.2"}],"vulnerabilityIdentifier":"WS-2019-0427","vulnerabilityDetails":"The function getNAF() in elliptic library has information leakage. This issue is mitigated in version 6.5.2","vulnerabilityUrl":"https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
non_test
ws medium detected in elliptic tgz ws medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file doccano app server static package json path to vulnerable library doccano app server static node modules elliptic package json dependency hierarchy webpack tgz root library node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library vulnerability details the function getnaf in elliptic library has information leakage this issue is mitigated in version publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails the function getnaf in elliptic library has information leakage this issue is mitigated in version vulnerabilityurl
0
140,089
11,302,166,364
IssuesEvent
2020-01-17 17:02:49
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
opened
CWE-362/CWE-367 (race), This usually indicates a security flaw. If an attacker can change any... @ /canopen_master/objdict.h:287
CWE-362 CWE-367 bug flawfinder level_4 static analysis testing triage
```yaml { "flaw": { "application": "N/A", "package": "N/A", "reported-by": "Alias Robotics", "subsystem": "N/A", "date-detected": "2020-01-17 (17:02)", "specificity": "subject-specific", "date-reported": "2020-01-17 (17:02)", "reproduction": "See artifacts below (if available)", "reproducibility": "always", "detected-by-method": "testing static", "reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_test_flawfinder/-/jobs/407177018/artifacts/download", "trace": "(context) template <typename T> T & access(){", "architectural-location": "application-specific", "languages": "None", "issue": "", "reported-by-relationship": "automatic", "detected-by": "Alias Robotics", "phase": "testing" }, "exploitation": { "exploitation-image": "", "description": "", "exploitation-vector": "" }, "severity": { "rvss-score": 0, "severity-description": "", "cvss-vector": "", "cvss-score": 0, "rvss-vector": "" }, "keywords": [ "flawfinder", "level_4", "static analysis", "testing", "triage", "CWE-362", "CWE-367", "bug" ], "vendor": null, "mitigation": { "pull-request": "", "date-mitigation": "", "description": "Set up the correct permissions (e.g., using setuid()) and try to open the file directly" }, "title": "CWE-362/CWE-367 (race), This usually indicates a security flaw. If an attacker can change any... @ /canopen_master/objdict.h:287", "type": "bug", "cwe": [ "CWE-362", "CWE-367" ], "description": "This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the files actual use (e.g., by moving files), the attacker can exploit the race condition (CWE-362/CWE-367!). Set up the correct permissions (e.g., using setuid()) and try to open the file directly. ", "cve": "None", "system": "./install/canopen_master/include/canopen_master/objdict.h:287:35", "id": 1, "links": "" } ```
1.0
CWE-362/CWE-367 (race), This usually indicates a security flaw. If an attacker can change any... @ /canopen_master/objdict.h:287 - ```yaml { "flaw": { "application": "N/A", "package": "N/A", "reported-by": "Alias Robotics", "subsystem": "N/A", "date-detected": "2020-01-17 (17:02)", "specificity": "subject-specific", "date-reported": "2020-01-17 (17:02)", "reproduction": "See artifacts below (if available)", "reproducibility": "always", "detected-by-method": "testing static", "reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_test_flawfinder/-/jobs/407177018/artifacts/download", "trace": "(context) template <typename T> T & access(){", "architectural-location": "application-specific", "languages": "None", "issue": "", "reported-by-relationship": "automatic", "detected-by": "Alias Robotics", "phase": "testing" }, "exploitation": { "exploitation-image": "", "description": "", "exploitation-vector": "" }, "severity": { "rvss-score": 0, "severity-description": "", "cvss-vector": "", "cvss-score": 0, "rvss-vector": "" }, "keywords": [ "flawfinder", "level_4", "static analysis", "testing", "triage", "CWE-362", "CWE-367", "bug" ], "vendor": null, "mitigation": { "pull-request": "", "date-mitigation": "", "description": "Set up the correct permissions (e.g., using setuid()) and try to open the file directly" }, "title": "CWE-362/CWE-367 (race), This usually indicates a security flaw. If an attacker can change any... @ /canopen_master/objdict.h:287", "type": "bug", "cwe": [ "CWE-362", "CWE-367" ], "description": "This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the files actual use (e.g., by moving files), the attacker can exploit the race condition (CWE-362/CWE-367!). Set up the correct permissions (e.g., using setuid()) and try to open the file directly. 
", "cve": "None", "system": "./install/canopen_master/include/canopen_master/objdict.h:287:35", "id": 1, "links": "" } ```
test
cwe cwe race this usually indicates a security flaw if an attacker can change any canopen master objdict h yaml flaw application n a package n a reported by alias robotics subsystem n a date detected specificity subject specific date reported reproduction see artifacts below if available reproducibility always detected by method testing static reproduction image gitlab com aliasrobotics offensive alurity pipelines active pipeline test flawfinder jobs artifacts download trace context template t access architectural location application specific languages none issue reported by relationship automatic detected by alias robotics phase testing exploitation exploitation image description exploitation vector severity rvss score severity description cvss vector cvss score rvss vector keywords flawfinder level static analysis testing triage cwe cwe bug vendor null mitigation pull request date mitigation description set up the correct permissions e g using setuid and try to open the file directly title cwe cwe race this usually indicates a security flaw if an attacker can change any canopen master objdict h type bug cwe cwe cwe description this usually indicates a security flaw if an attacker can change anything along the path between the call to access and the files actual use e g by moving files the attacker can exploit the race condition cwe cwe set up the correct permissions e g using setuid and try to open the file directly cve none system install canopen master include canopen master objdict h id links
1
7,754
2,930,276,963
IssuesEvent
2015-06-29 01:37:00
golang/go
https://api.github.com/repos/golang/go
opened
x/net/websocket: TestClose fails on Plan 9
OS-Plan9 Testing
See http://build.golang.org/log/86c5f54b2e864b4a89f8756c4c069739fb314cc9 ``` 015/06/28 17:48:38 Test WebSocket server listening on 127.0.0.1:51846 --- FAIL: TestClose (0.00s) websocket_test.go:447: ws.Close(): expected error, got <nil> FAIL ```
1.0
x/net/websocket: TestClose fails on Plan 9 - See http://build.golang.org/log/86c5f54b2e864b4a89f8756c4c069739fb314cc9 ``` 015/06/28 17:48:38 Test WebSocket server listening on 127.0.0.1:51846 --- FAIL: TestClose (0.00s) websocket_test.go:447: ws.Close(): expected error, got <nil> FAIL ```
test
x net websocket testclose fails on plan see test websocket server listening on fail testclose websocket test go ws close expected error got fail
1
295,765
22,270,765,860
IssuesEvent
2022-06-10 12:07:48
layerai/dbt-layer
https://api.github.com/repos/layerai/dbt-layer
closed
add dbt-layer to dbt docs site's Available Adapters page
documentation
The [Available Adapters](https://docs.getdbt.com/docs/available-adapters) page is one of the dbt community's most-visited docs pages. It would be of great benefit for first-time visitors to the dbt docs to see: 1. that this adapter is a possible option for using dbt-core, and 2. the large and diverse set of supported databases in the dbt ecosystem. https://github.com/dbt-labs/docs.getdbt.com/issues/1489 exists to address this with all as-of-yet undocumented adapters. We just released [Documenting a new adapter](https://docs.getdbt.com/docs/contributing/documenting-a-new-adapter), a new guide on how to add an adapter to the Available Adapters page. I'd love to see this adapter on that page, so feel free to reach out with any questions/blockers by either replying to this issue, or posting in the #adapter-ecosystem channel of the dbt Community Slack. Looking forward to the contribution!
1.0
add dbt-layer to dbt docs site's Available Adapters page - The [Available Adapters](https://docs.getdbt.com/docs/available-adapters) page is one of the dbt community's most-visited docs pages. It would be of great benefit for first-time visitors to the dbt docs to see: 1. that this adapter is a possible option for using dbt-core, and 2. the large and diverse set of supported databases in the dbt ecosystem. https://github.com/dbt-labs/docs.getdbt.com/issues/1489 exists to address this with all as-of-yet undocumented adapters. We just released [Documenting a new adapter](https://docs.getdbt.com/docs/contributing/documenting-a-new-adapter), a new guide on how to add an adapter to the Available Adapters page. I'd love to see this adapter on that page, so feel free to reach out with any questions/blockers by either replying to this issue, or posting in the #adapter-ecosystem channel of the dbt Community Slack. Looking forward to the contribution!
non_test
add dbt layer to dbt docs site s available adapters page the page is one of the dbt community s most visited docs pages it would be of great benefit for first time visitors to the dbt docs to see that this adapter is a possible option for using dbt core and the large and diverse set of supported databases in the dbt ecosystem exists to address this with all as of yet undocumented adapters we just released a new guide on how to add an adapter to the available adapters page i d love to see this adapter on that page so feel free to reach out with any questions blockers by either replying to this issue or posting in the adapter ecosystem channel of the dbt community slack looking forward to the contribution
0
192,693
14,626,536,119
IssuesEvent
2020-12-23 10:32:10
CSOIreland/PxStat
https://api.github.com/repos/CSOIreland/PxStat
closed
[BUG] Copy share button not working in data view
bug fixed released tested
**Describe the bug** Copy share button not working in data view
1.0
[BUG] Copy share button not working in data view - **Describe the bug** Copy share button not working in data view
test
copy share button not working in data view describe the bug copy share button not working in data view
1
55,187
30,622,364,442
IssuesEvent
2023-07-24 09:08:09
python/cpython
https://api.github.com/repos/python/cpython
closed
Adding selectors has two KeyError exceptions in the success path
type-feature performance topic-asyncio
Similar to https://github.com/python/cpython/issues/106527, adding a new asyncio reader has to hit `_SelectorMapping.__getitem__` which is expected to raise and catch KeyError twice since the reader will not yet be in the map. When connections are constantly being added and removed because devices are being polled over http/websocket the overhead of adding/removing readers adds up. For a webserver with connections constantly being added/removed, the cost of adding and removing impacts how many clients can be handled Another place I see this come up is with dbus connections which need to get torn down and created at fast clip when dealing with bluetooth devices. See https://github.com/python/cpython/issues/106527#issuecomment-1627468269 and https://github.com/python/cpython/issues/106527#issuecomment-1625923919 for where this was split from <!-- gh-linked-prs --> ### Linked PRs * gh-106665 <!-- /gh-linked-prs -->
True
Adding selectors has two KeyError exceptions in the success path - Similar to https://github.com/python/cpython/issues/106527, adding a new asyncio reader has to hit `_SelectorMapping.__getitem__` which is expected to raise and catch KeyError twice since the reader will not yet be in the map. When connections are constantly being added and removed because devices are being polled over http/websocket the overhead of adding/removing readers adds up. For a webserver with connections constantly being added/removed, the cost of adding and removing impacts how many clients can be handled Another place I see this come up is with dbus connections which need to get torn down and created at fast clip when dealing with bluetooth devices. See https://github.com/python/cpython/issues/106527#issuecomment-1627468269 and https://github.com/python/cpython/issues/106527#issuecomment-1625923919 for where this was split from <!-- gh-linked-prs --> ### Linked PRs * gh-106665 <!-- /gh-linked-prs -->
non_test
adding selectors has two keyerror exceptions in the success path similar to adding a new asyncio reader has to hit selectormapping getitem which is expected to raise and catch keyerror twice since the reader will not yet be in the map when connections are constantly being added and removed because devices are being polled over http websocket the overhead of adding removing readers adds up for a webserver with connections constantly being added removed the cost of adding and removing impacts how many clients can be handled another place i see this come up is with dbus connections which need to get torn down and created at fast clip when dealing with bluetooth devices see and for where this was split from linked prs gh
0
85,895
8,001,821,784
IssuesEvent
2018-07-23 05:49:30
bitcoinjs/bitcoinjs-lib
https://api.github.com/repos/bitcoinjs/bitcoinjs-lib
opened
Integration testing in a browser
how to / question / docs testing
@fanatid I noticed you appear have a nice setup in https://github.com/cryptocoinjs/secp256k1-node that you have set up using `karma`. What is your experience there? Is it something we could roll-out to all of the `bitcoinjs` dependencies/libraries easily? Is it possible to target problematic browsers like Safari etc?
1.0
Integration testing in a browser - @fanatid I noticed you appear have a nice setup in https://github.com/cryptocoinjs/secp256k1-node that you have set up using `karma`. What is your experience there? Is it something we could roll-out to all of the `bitcoinjs` dependencies/libraries easily? Is it possible to target problematic browsers like Safari etc?
test
integration testing in a browser fanatid i noticed you appear have a nice setup in that you have set up using karma what is your experience there is it something we could roll out to all of the bitcoinjs dependencies libraries easily is it possible to target problematic browsers like safari etc
1
53,694
13,879,974,156
IssuesEvent
2020-10-17 16:43:33
Theatreers/Theatreers
https://api.github.com/repos/Theatreers/Theatreers
opened
WS-2020-0091 (High) detected in http-proxy-1.18.0.tgz
security vulnerability
## WS-2020-0091 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.18.0.tgz</b></p></summary> <p>HTTP proxying for the masses</p> <p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.0.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.0.tgz</a></p> <p>Path to dependency file: Theatreers/src/Theatreers.Frontend.Old/package.json</p> <p>Path to vulnerable library: Theatreers/src/Theatreers.Frontend.Old/node_modules/http-proxy/package.json,Theatreers/src/Theatreers.Frontend.Old/node_modules/http-proxy/package.json</p> <p> Dependency Hierarchy: - cli-service-4.0.5.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - http-proxy-middleware-0.19.1.tgz - :x: **http-proxy-1.18.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Theatreers/Theatreers/commit/5b84ea045b36c4ad6f9fda41cea95252584b7e55">5b84ea045b36c4ad6f9fda41cea95252584b7e55</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function. 
<p>Publish Date: 2020-05-14 <p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p> <p>Release Date: 2020-05-26</p> <p>Fix Resolution: http-proxy - 1.18.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2020-0091 (High) detected in http-proxy-1.18.0.tgz - ## WS-2020-0091 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.18.0.tgz</b></p></summary> <p>HTTP proxying for the masses</p> <p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.0.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.0.tgz</a></p> <p>Path to dependency file: Theatreers/src/Theatreers.Frontend.Old/package.json</p> <p>Path to vulnerable library: Theatreers/src/Theatreers.Frontend.Old/node_modules/http-proxy/package.json,Theatreers/src/Theatreers.Frontend.Old/node_modules/http-proxy/package.json</p> <p> Dependency Hierarchy: - cli-service-4.0.5.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - http-proxy-middleware-0.19.1.tgz - :x: **http-proxy-1.18.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Theatreers/Theatreers/commit/5b84ea045b36c4ad6f9fda41cea95252584b7e55">5b84ea045b36c4ad6f9fda41cea95252584b7e55</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function. 
<p>Publish Date: 2020-05-14 <p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p> <p>Release Date: 2020-05-26</p> <p>Fix Resolution: http-proxy - 1.18.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file theatreers src theatreers frontend old package json path to vulnerable library theatreers src theatreers frontend old node modules http proxy package json theatreers src theatreers frontend old node modules http proxy package json dependency hierarchy cli service tgz root library webpack dev server tgz http proxy middleware tgz x http proxy tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy step up your open source security game with whitesource
0
40,885
5,321,297,099
IssuesEvent
2017-02-14 13:08:44
e107inc/e107
https://api.github.com/repos/e107inc/e107
closed
Cannot sign into the admin login directly.
testing required
Can sign in main site then go to admin, but the admin login page seems to be broken. Latest GIT, Google Chrome 56.0.2924.87 (64-bit), Firefox, Edge, all same issue. ![image](https://cloud.githubusercontent.com/assets/4084331/22862800/39076a58-f0fb-11e6-8aef-1a04bbff1875.png)
1.0
Cannot sign into the admin login directly. - Can sign in main site then go to admin, but the admin login page seems to be broken. Latest GIT, Google Chrome 56.0.2924.87 (64-bit), Firefox, Edge, all same issue. ![image](https://cloud.githubusercontent.com/assets/4084331/22862800/39076a58-f0fb-11e6-8aef-1a04bbff1875.png)
test
cannot sign into the admin login directly can sign in main site then go to admin but the admin login page seems to be broken latest git google chrome bit firefox edge all same issue
1
86,449
3,715,298,202
IssuesEvent
2016-03-03 00:57:21
movabletype/smartphone-app
https://api.github.com/repos/movabletype/smartphone-app
closed
Delete Page/Entry you just created doesn't behave like delete older Entry/Page
bug Priority: LOW
Steps: 1. Go to Manage Entries, select an entry. 2. Tap Settings of footer for the entry, tap "Delete this Ennry". Expected: App delete the entry and move to previous screen (Manage Entries in here) automatically. And it works as you expected. 3. Create New Entry, send the entry to MT. 4. Tap Settings of footer for the entry, tap "Delete this Ennry". Expected: App delete the entry and move to previous screen (Website/Blog dashboard or Manage Entries) automatically. Observed: App delete the entry but still display the Edit Entry screen. FYI: If you try to "Save" or "Delete this Entry" after step 4, you just get error, because it's already deleted. All you can do is close the edit entry screen and back to Dashboard.
1.0
Delete Page/Entry you just created doesn't behave like delete older Entry/Page - Steps: 1. Go to Manage Entries, select an entry. 2. Tap Settings of footer for the entry, tap "Delete this Ennry". Expected: App delete the entry and move to previous screen (Manage Entries in here) automatically. And it works as you expected. 3. Create New Entry, send the entry to MT. 4. Tap Settings of footer for the entry, tap "Delete this Ennry". Expected: App delete the entry and move to previous screen (Website/Blog dashboard or Manage Entries) automatically. Observed: App delete the entry but still display the Edit Entry screen. FYI: If you try to "Save" or "Delete this Entry" after step 4, you just get error, because it's already deleted. All you can do is close the edit entry screen and back to Dashboard.
non_test
delete page entry you just created doesn t behave like delete older entry page steps go to manage entries select an entry tap settings of footer for the entry tap delete this ennry expected app delete the entry and move to previous screen manage entries in here automatically and it works as you expected create new entry send the entry to mt tap settings of footer for the entry tap delete this ennry expected app delete the entry and move to previous screen website blog dashboard or manage entries automatically observed app delete the entry but still display the edit entry screen fyi if you try to save or delete this entry after step you just get error because it s already deleted all you can do is close the edit entry screen and back to dashboard
0
281,315
24,382,398,769
IssuesEvent
2022-10-04 08:56:32
ubtue/tuefind
https://api.github.com/repos/ubtue/tuefind
closed
Verlinkte Nebentitel
System: RelBib ready for testing
Bei diesem Titel sind sehr viele Nebentitel in der Vollanzeige verlinkt. Der Nebentitel "The way" führt dabei zu mehreren Treffern. Das sollte ja eigentlich nicht sein. Was ist denn der Hintergrund für die verlinkten Nebentitel? Muss diese Verlinkung überhaupt angeboten werden? ![grafik](https://user-images.githubusercontent.com/25769591/177571966-a3887139-f3eb-4b3a-ad93-34c774225e1d.png) https://www.relbib.de/Record/643825339?lng=de Hier die o.g. Trefferliste: https://www.relbib.de/Search/Results?lookfor=%22The+way%22&type=Title
1.0
Verlinkte Nebentitel - Bei diesem Titel sind sehr viele Nebentitel in der Vollanzeige verlinkt. Der Nebentitel "The way" führt dabei zu mehreren Treffern. Das sollte ja eigentlich nicht sein. Was ist denn der Hintergrund für die verlinkten Nebentitel? Muss diese Verlinkung überhaupt angeboten werden? ![grafik](https://user-images.githubusercontent.com/25769591/177571966-a3887139-f3eb-4b3a-ad93-34c774225e1d.png) https://www.relbib.de/Record/643825339?lng=de Hier die o.g. Trefferliste: https://www.relbib.de/Search/Results?lookfor=%22The+way%22&type=Title
test
verlinkte nebentitel bei diesem titel sind sehr viele nebentitel in der vollanzeige verlinkt der nebentitel the way führt dabei zu mehreren treffern das sollte ja eigentlich nicht sein was ist denn der hintergrund für die verlinkten nebentitel muss diese verlinkung überhaupt angeboten werden hier die o g trefferliste
1
156,064
19,809,431,758
IssuesEvent
2022-01-19 10:33:22
benchmarkdebricked/angular
https://api.github.com/repos/benchmarkdebricked/angular
opened
CVE-2018-16472 (High) detected in cached-path-relative-1.0.1.tgz
security vulnerability
## CVE-2018-16472 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cached-path-relative-1.0.1.tgz</b></p></summary> <p>Memoize the results of the path.relative function</p> <p>Library home page: <a href="https://registry.npmjs.org/cached-path-relative/-/cached-path-relative-1.0.1.tgz">https://registry.npmjs.org/cached-path-relative/-/cached-path-relative-1.0.1.tgz</a></p> <p> Dependency Hierarchy: - karma-2.0.0.tgz (Root Library) - browserify-14.5.0.tgz - :x: **cached-path-relative-1.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/angular/commit/d5d066ec5f183ee78b6a36dc6eaa6714647bc518">d5d066ec5f183ee78b6a36dc6eaa6714647bc518</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution attack in cached-path-relative versions <=1.0.1 allows an attacker to inject properties on Object.prototype which are then inherited by all the JS objects through the prototype chain causing a DoS attack. <p>Publish Date: 2018-11-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16472>CVE-2018-16472</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16472">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16472</a></p> <p>Release Date: 2018-11-06</p> <p>Fix Resolution: node-cached-path-relative - 1.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-16472 (High) detected in cached-path-relative-1.0.1.tgz - ## CVE-2018-16472 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cached-path-relative-1.0.1.tgz</b></p></summary> <p>Memoize the results of the path.relative function</p> <p>Library home page: <a href="https://registry.npmjs.org/cached-path-relative/-/cached-path-relative-1.0.1.tgz">https://registry.npmjs.org/cached-path-relative/-/cached-path-relative-1.0.1.tgz</a></p> <p> Dependency Hierarchy: - karma-2.0.0.tgz (Root Library) - browserify-14.5.0.tgz - :x: **cached-path-relative-1.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/angular/commit/d5d066ec5f183ee78b6a36dc6eaa6714647bc518">d5d066ec5f183ee78b6a36dc6eaa6714647bc518</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution attack in cached-path-relative versions <=1.0.1 allows an attacker to inject properties on Object.prototype which are then inherited by all the JS objects through the prototype chain causing a DoS attack. 
<p>Publish Date: 2018-11-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16472>CVE-2018-16472</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16472">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16472</a></p> <p>Release Date: 2018-11-06</p> <p>Fix Resolution: node-cached-path-relative - 1.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in cached path relative tgz cve high severity vulnerability vulnerable library cached path relative tgz memoize the results of the path relative function library home page a href dependency hierarchy karma tgz root library browserify tgz x cached path relative tgz vulnerable library found in head commit a href vulnerability details a prototype pollution attack in cached path relative versions allows an attacker to inject properties on object prototype which are then inherited by all the js objects through the prototype chain causing a dos attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node cached path relative step up your open source security game with whitesource
0
77,739
3,507,239,769
IssuesEvent
2016-01-08 12:06:25
OregonCore/OregonCore
https://api.github.com/repos/OregonCore/OregonCore
closed
Buffs Stack (BB #718)
migrated Priority: Medium Type: Bug
This issue was migrated from bitbucket. **Original Reporter:** Forthehorde **Original Date:** 13.10.2014 20:35:39 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** resolved **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/718 <hr> Buffs are stackable. on the pic is ID-27502 - [Scroll of Stamina V] but I think every buffs can stack
1.0
Buffs Stack (BB #718) - This issue was migrated from bitbucket. **Original Reporter:** Forthehorde **Original Date:** 13.10.2014 20:35:39 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** resolved **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/718 <hr> Buffs are stackable. on the pic is ID-27502 - [Scroll of Stamina V] but I think every buffs can stack
non_test
buffs stack bb this issue was migrated from bitbucket original reporter forthehorde original date gmt original priority major original type bug original state resolved direct link buffs are stackable on the pic is id but i think every buffs can stack
0
127,944
10,500,985,235
IssuesEvent
2019-09-26 11:48:36
GSG-G7/lang-mate
https://api.github.com/repos/GSG-G7/lang-mate
closed
PUT /api/v1/users/changepassword
T5h query testing validation
- [x] Route - /api/v1/users/changepassword - we will use this route to change the user password ---- - [x] request body ``` req.body { userId : "1", newPassword : "newPassword" } ``` ---- - [x] Request Validation the new password must be validate contain more then 5 characters and numbers ---- - [x] Queries - we should hashing the password ``` updatePassword(userId, newPassword) ``` - [x] **Testing** * [x] testing the route * [x] testing the query
1.0
PUT /api/v1/users/changepassword - - [x] Route - /api/v1/users/changepassword - we will use this route to change the user password ---- - [x] request body ``` req.body { userId : "1", newPassword : "newPassword" } ``` ---- - [x] Request Validation the new password must be validate contain more then 5 characters and numbers ---- - [x] Queries - we should hashing the password ``` updatePassword(userId, newPassword) ``` - [x] **Testing** * [x] testing the route * [x] testing the query
test
put api users changepassword route api users changepassword we will use this route to change the user password request body req body userid newpassword newpassword request validation the new password must be validate contain more then characters and numbers queries we should hashing the password updatepassword userid newpassword testing testing the route testing the query
1
121,671
10,191,734,324
IssuesEvent
2019-08-12 09:13:06
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
opened
An error dialog pops up when selecting 'Set Public Access Level…' for one blob container
:beetle: regression :gear: blobs 🧪 testing
**Storage Explorer Version:** 1.9.0 **Build:** 20190810.5 **Platform/OS:** Linux Ubuntu 19.04/Windows 10/macOS High Sierra **Architecture:** ia32/x64 **Regression From:** Previous release(1.9.0) **Steps to reproduce:** 1. Expand a storage account. 2. Right click the 'Blob Containers' then select 'Set Public Access Level...'. 3. Check the result. **Expect Experience:** No error occurs. **Actual Experience** An error dialog occurs. ![image](https://user-images.githubusercontent.com/41351993/62853585-66edb100-bd1f-11e9-9c59-45d26b55def0.png)
1.0
An error dialog pops up when selecting 'Set Public Access Level…' for one blob container - **Storage Explorer Version:** 1.9.0 **Build:** 20190810.5 **Platform/OS:** Linux Ubuntu 19.04/Windows 10/macOS High Sierra **Architecture:** ia32/x64 **Regression From:** Previous release(1.9.0) **Steps to reproduce:** 1. Expand a storage account. 2. Right click the 'Blob Containers' then select 'Set Public Access Level...'. 3. Check the result. **Expect Experience:** No error occurs. **Actual Experience** An error dialog occurs. ![image](https://user-images.githubusercontent.com/41351993/62853585-66edb100-bd1f-11e9-9c59-45d26b55def0.png)
test
an error dialog pops up when selecting set public access level… for one blob container storage explorer version build platform os linux ubuntu windows macos high sierra architecture regression from previous release steps to reproduce expand a storage account right click the blob containers then select set public access level check the result expect experience no error occurs actual experience an error dialog occurs
1
12,184
5,164,050,602
IssuesEvent
2017-01-17 09:21:32
v8mips/v8mips
https://api.github.com/repos/v8mips/v8mips
closed
Port assembler changes for enabling GrowHeap in Wasm
build-failure
Changes for enabling GrowHeap in Wasm https://codereview.chromium.org/1759873002/ should be ported to MIPS and MIPS64.
1.0
Port assembler changes for enabling GrowHeap in Wasm - Changes for enabling GrowHeap in Wasm https://codereview.chromium.org/1759873002/ should be ported to MIPS and MIPS64.
non_test
port assembler changes for enabling growheap in wasm changes for enabling growheap in wasm should be ported to mips and
0
224,245
17,674,121,987
IssuesEvent
2021-08-23 10:06:31
lutraconsulting/input-manual-tests
https://api.github.com/repos/lutraconsulting/input-manual-tests
opened
Test Execution InputApp 1.0.0 (iOS)
test execution
## Test plan for Input manual testing | Test environment | Value | |---|---| | Input Version: | 1.0.0 - 2.15.210820055619 | | Mergin Version: | 2021.8 | | Mergin URL: <> | public.cloudmerging.com | | QGIS Version: | 3.16 | | Mergin plugin Version: | 2021.4.1 | | Mobile OS: iOS | | Date of Execution: | 23.8.2021 | --- ### Test Cases - [ ] ( #2 ) TC 01: Mergin & Projects Manipulation - [ ] ( #3 ) TC 02: Sync & Project Status - [ ] ( #4 ) TC 03: Map Canvas - [ ] ( #5 ) TC 04: Recording - [ ] ( #6 ) TC 05: Forms - [ ] ( #7 ) TC 06: Data Providers - [ ] ( #8 ) TC 07: Translations - [ ] ( #18 ) TC 08: System Specifics - [ ] ( #19 ) TC 09: Welcome Screen & Project - [ ] ( #24 ) TC 10: Proj Tests - [ ] ( #37 ) TC 11: Subscriptions - [ ] ( #39 ) TC 12: Tickets --- | Test Execution Outcome | | |---|---| | Issues Created During Testing: | LINK TO ISSUE(S) | **Bugs Created**
1.0
Test Execution InputApp 1.0.0 (iOS) - ## Test plan for Input manual testing | Test environment | Value | |---|---| | Input Version: | 1.0.0 - 2.15.210820055619 | | Mergin Version: | 2021.8 | | Mergin URL: <> | public.cloudmerging.com | | QGIS Version: | 3.16 | | Mergin plugin Version: | 2021.4.1 | | Mobile OS: iOS | | Date of Execution: | 23.8.2021 | --- ### Test Cases - [ ] ( #2 ) TC 01: Mergin & Projects Manipulation - [ ] ( #3 ) TC 02: Sync & Project Status - [ ] ( #4 ) TC 03: Map Canvas - [ ] ( #5 ) TC 04: Recording - [ ] ( #6 ) TC 05: Forms - [ ] ( #7 ) TC 06: Data Providers - [ ] ( #8 ) TC 07: Translations - [ ] ( #18 ) TC 08: System Specifics - [ ] ( #19 ) TC 09: Welcome Screen & Project - [ ] ( #24 ) TC 10: Proj Tests - [ ] ( #37 ) TC 11: Subscriptions - [ ] ( #39 ) TC 12: Tickets --- | Test Execution Outcome | | |---|---| | Issues Created During Testing: | LINK TO ISSUE(S) | **Bugs Created**
test
test execution inputapp ios test plan for input manual testing test environment value input version mergin version mergin url public cloudmerging com qgis version mergin plugin version mobile os ios date of execution test cases tc mergin projects manipulation tc sync project status tc map canvas tc recording tc forms tc data providers tc translations tc system specifics tc welcome screen project tc proj tests tc subscriptions tc tickets test execution outcome issues created during testing link to issue s bugs created
1
298,164
9,196,441,150
IssuesEvent
2019-03-07 07:05:05
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
bronze5.eu - site is not usable
browser-firefox priority-normal
<!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: web --> **URL**: http://bronze5.eu **Browser / Version**: Firefox 65.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: page is not displayed **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/2/572265d8-5f0b-4314-a41d-28b15b9444fa-thumb.jpg)](https://webcompat.com/uploads/2019/2/572265d8-5f0b-4314-a41d-28b15b9444fa.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
bronze5.eu - site is not usable - <!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: web --> **URL**: http://bronze5.eu **Browser / Version**: Firefox 65.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: page is not displayed **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/2/572265d8-5f0b-4314-a41d-28b15b9444fa-thumb.jpg)](https://webcompat.com/uploads/2019/2/572265d8-5f0b-4314-a41d-28b15b9444fa.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_test
eu site is not usable url browser version firefox operating system windows tested another browser yes problem type site is not usable description page is not displayed steps to reproduce browser configuration none from with ❤️
0
152,667
12,124,122,073
IssuesEvent
2020-04-22 13:45:26
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: acceptance/decommission failed
C-test-failure O-roachtest O-robot branch-provisional_202004210018_v20.1.0 release-blocker
[(roachtest).acceptance/decommission failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891147&tab=buildLog) on [provisional_202004210018_v20.1.0@c0f00d2a156295a1cb79ed960940c167cb03ccbc](https://github.com/cockroachdb/cockroach/commits/c0f00d2a156295a1cb79ed960940c167cb03ccbc): ``` The test failed on branch=provisional_202004210018_v20.1.0, cloud=gce: test artifacts and logs in: artifacts/acceptance/decommission/run_1 cluster.go:1864,decommission.go:263,acceptance.go:85,test_runner.go:753: error with attached stack trace: main.execCmd /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:406 main.(*cluster).StartE /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:1859 main.(*cluster).Start /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:1864 main.runDecommissionAcceptance /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/decommission.go:263 main.registerAcceptance.func1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/acceptance.go:85 main.(*testRunner).runTest.func2 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:753 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 - error with embedded safe details: %s returned: stderr: %s stdout: %s -- arg 1: <string> -- arg 2: <string> -- arg 3: <string> - /go/src/github.com/cockroachdb/cockroach/bin/roachprod start --env=COCKROACH_SCAN_MAX_IDLE_TIME=5ms local returned: stderr: ckroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1622 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357: 3: invalid version string 'c0f00d2' github.com/cockroachdb/cockroach/pkg/util/version.Parse /go/src/github.com/cockroachdb/cockroach/pkg/util/version/version.go:90 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.getCockroachVersion 
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:96 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.Cockroach.Start.func2 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:168 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1622 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357: I200422 03:19:15.221141 1 cluster_synced.go:1704 command failed stdout: local: starting: - exit status 1 ``` <details><summary>More</summary><p> Artifacts: [/acceptance/decommission](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891147&tab=artifacts#/acceptance/decommission) Related: - #47804 roachtest: acceptance/decommission failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #47671 roachtest: acceptance/decommission failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fdecommission.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by 
[pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: acceptance/decommission failed - [(roachtest).acceptance/decommission failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891147&tab=buildLog) on [provisional_202004210018_v20.1.0@c0f00d2a156295a1cb79ed960940c167cb03ccbc](https://github.com/cockroachdb/cockroach/commits/c0f00d2a156295a1cb79ed960940c167cb03ccbc): ``` The test failed on branch=provisional_202004210018_v20.1.0, cloud=gce: test artifacts and logs in: artifacts/acceptance/decommission/run_1 cluster.go:1864,decommission.go:263,acceptance.go:85,test_runner.go:753: error with attached stack trace: main.execCmd /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:406 main.(*cluster).StartE /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:1859 main.(*cluster).Start /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:1864 main.runDecommissionAcceptance /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/decommission.go:263 main.registerAcceptance.func1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/acceptance.go:85 main.(*testRunner).runTest.func2 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:753 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 - error with embedded safe details: %s returned: stderr: %s stdout: %s -- arg 1: <string> -- arg 2: <string> -- arg 3: <string> - /go/src/github.com/cockroachdb/cockroach/bin/roachprod start --env=COCKROACH_SCAN_MAX_IDLE_TIME=5ms local returned: stderr: ckroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1622 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357: 3: invalid version string 'c0f00d2' github.com/cockroachdb/cockroach/pkg/util/version.Parse /go/src/github.com/cockroachdb/cockroach/pkg/util/version/version.go:90 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.getCockroachVersion 
/go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:96 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.Cockroach.Start.func2 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:168 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1622 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357: I200422 03:19:15.221141 1 cluster_synced.go:1704 command failed stdout: local: starting: - exit status 1 ``` <details><summary>More</summary><p> Artifacts: [/acceptance/decommission](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891147&tab=artifacts#/acceptance/decommission) Related: - #47804 roachtest: acceptance/decommission failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #47671 roachtest: acceptance/decommission failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fdecommission.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by 
[pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
test
roachtest acceptance decommission failed on the test failed on branch provisional cloud gce test artifacts and logs in artifacts acceptance decommission run cluster go decommission go acceptance go test runner go error with attached stack trace main execcmd go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster starte go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster start go src github com cockroachdb cockroach pkg cmd roachtest cluster go main rundecommissionacceptance go src github com cockroachdb cockroach pkg cmd roachtest decommission go main registeracceptance go src github com cockroachdb cockroach pkg cmd roachtest acceptance go main testrunner runtest go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s error with embedded safe details s returned stderr s stdout s arg arg arg go src github com cockroachdb cockroach bin roachprod start env cockroach scan max idle time local returned stderr ckroach pkg cmd roachprod install syncedcluster parallel go src github com cockroachdb cockroach pkg cmd roachprod install cluster synced go runtime goexit usr local go src runtime asm s invalid version string github com cockroachdb cockroach pkg util version parse go src github com cockroachdb cockroach pkg util version version go github com cockroachdb cockroach pkg cmd roachprod install getcockroachversion go src github com cockroachdb cockroach pkg cmd roachprod install cockroach go github com cockroachdb cockroach pkg cmd roachprod install cockroach start go src github com cockroachdb cockroach pkg cmd roachprod install cockroach go github com cockroachdb cockroach pkg cmd roachprod install syncedcluster parallel go src github com cockroachdb cockroach pkg cmd roachprod install cluster synced go runtime goexit usr local go src runtime asm s cluster synced go command failed stdout local starting exit status more artifacts related roachtest 
acceptance decommission failed roachtest acceptance decommission failed powered by
1
348,103
24,906,764,573
IssuesEvent
2022-10-29 10:54:47
Azure/terraform-azurerm-caf-enterprise-scale
https://api.github.com/repos/Azure/terraform-azurerm-caf-enterprise-scale
closed
Error received when running custom network connectivity deployment
documentation
<h3 id=communitynote>Community Note </h3> <ul> <li>Please vote on this issue by adding a &#128077; <a href="https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/">reaction</a> to the original issue to help the community and maintainers prioritize this request </li> <li>Please do not leave &quot;+1&quot; or &quot;me too&quot; comments, they generate extra noise for issue followers and do not help prioritize the request </li> <li>If you are interested in working on this issue or have submitted a pull request, please leave a comment </li> </ul> <h3 id=versions>Versions </h3> <p><strong>terraform</strong>: v1.1.2 </p> <p><strong>azure provider</strong>: 3.0.2 </p> <p><strong>module</strong>: 2.3.1 </p> <h3 id=description>Description </h3> <p>Error received when using the &quot;Deploy Connectivity Resources With Custom Settings&quot; configuration guide. Specific error received is: &quot;The given value is not suitable for child module variable &quot;configure<em>connectivity</em>resources&quot; defined at .terraform\modules\enterprise<em>scale\variables.tf:224,1-44: attribute &quot;settings&quot;: attribute │ &quot;hub</em>networks&quot;: element 0: attribute &quot;config&quot;: attribute &quot;<strong>enable<em>hub</em>network<em>mesh</em>peering</strong>&quot; is required.&quot; </p> <h4 id=describethebug>Describe the bug </h4> <p>The bug appears to be relating to new functionality within the terraform module for hub network mesh peering, which has a mandatory attribute missing from the <strong>settings.connectivity.tf</strong> </p> <h4 id=stepstoreproduce>Steps to Reproduce </h4> <p>Following the &quot;Deploy Connectivity Resources with Custom Settings&quot; guide reproduces the error. 
https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/wiki/%5BExamples%5D-Deploy-Connectivity-Resources-With-Custom-Settings </p> <p>As above, by following the guide here: https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/wiki/%5BExamples%5D-Deploy-Connectivity-Resources-With-Custom-Settings </p> <h4 id=screenshots>Screenshots </h4> <p><img src="https://user-images.githubusercontent.com/106317528/186480513-ed23aff6-4178-4ae0-b63b-b1e5904c5a91.png" alt=image> </p> <h4 id=additionalcontext>Additional context </h4> <p>I believe adding this line resolves the issue: <img src="https://user-images.githubusercontent.com/106317528/186480612-6b7a62fe-806a-4aed-9a48-ded844cd23a0.png" alt=image> </p>
1.0
Error received when running custom network connectivity deployment - <h3 id=communitynote>Community Note </h3> <ul> <li>Please vote on this issue by adding a &#128077; <a href="https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/">reaction</a> to the original issue to help the community and maintainers prioritize this request </li> <li>Please do not leave &quot;+1&quot; or &quot;me too&quot; comments, they generate extra noise for issue followers and do not help prioritize the request </li> <li>If you are interested in working on this issue or have submitted a pull request, please leave a comment </li> </ul> <h3 id=versions>Versions </h3> <p><strong>terraform</strong>: v1.1.2 </p> <p><strong>azure provider</strong>: 3.0.2 </p> <p><strong>module</strong>: 2.3.1 </p> <h3 id=description>Description </h3> <p>Error received when using the &quot;Deploy Connectivity Resources With Custom Settings&quot; configuration guide. Specific error received is: &quot;The given value is not suitable for child module variable &quot;configure<em>connectivity</em>resources&quot; defined at .terraform\modules\enterprise<em>scale\variables.tf:224,1-44: attribute &quot;settings&quot;: attribute │ &quot;hub</em>networks&quot;: element 0: attribute &quot;config&quot;: attribute &quot;<strong>enable<em>hub</em>network<em>mesh</em>peering</strong>&quot; is required.&quot; </p> <h4 id=describethebug>Describe the bug </h4> <p>The bug appears to be relating to new functionality within the terraform module for hub network mesh peering, which has a mandatory attribute missing from the <strong>settings.connectivity.tf</strong> </p> <h4 id=stepstoreproduce>Steps to Reproduce </h4> <p>Following the &quot;Deploy Connectivity Resources with Custom Settings&quot; guide reproduces the error. 
https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/wiki/%5BExamples%5D-Deploy-Connectivity-Resources-With-Custom-Settings </p> <p>As above, by following the guide here: https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/wiki/%5BExamples%5D-Deploy-Connectivity-Resources-With-Custom-Settings </p> <h4 id=screenshots>Screenshots </h4> <p><img src="https://user-images.githubusercontent.com/106317528/186480513-ed23aff6-4178-4ae0-b63b-b1e5904c5a91.png" alt=image> </p> <h4 id=additionalcontext>Additional context </h4> <p>I believe adding this line resolves the issue: <img src="https://user-images.githubusercontent.com/106317528/186480612-6b7a62fe-806a-4aed-9a48-ded844cd23a0.png" alt=image> </p>
non_test
error received when running custom network connectivity deployment community note please vote on this issue by adding a please do not leave quot quot or quot me too quot comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment versions terraform azure provider module description error received when using the quot deploy connectivity resources with custom settings quot configuration guide specific error received is quot the given value is not suitable for child module variable quot configure connectivity resources quot defined at terraform modules enterprise scale variables tf attribute quot settings quot attribute │ quot hub networks quot element attribute quot config quot attribute quot enable hub network mesh peering quot is required quot describe the bug the bug appears to be relating to new functionality within the terraform module for hub network mesh peering which has a mandatory attribute missing from the settings connectivity tf steps to reproduce following the quot deploy connectivity resources with custom settings quot guide reproduces the error as above by following the guide here screenshots additional context i believe adding this line resolves the issue
0
89,490
8,205,297,522
IssuesEvent
2018-09-03 09:40:51
humera987/HumTestData
https://api.github.com/repos/humera987/HumTestData
closed
humz_proj_test : ApiV1TestSuitesProjectIdIdCoverageGetPathParamNullValueId
humz_proj_test
Project : humz_proj_test Job : UAT Env : UAT Region : FXLabs/US_WEST_1 Result : fail Status Code : 200 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Sep 2018 06:59:49 GMT]} Endpoint : http://13.56.210.25/api/v1/test-suites/project-id/null/coverage Request : Response : { "requestId" : "None", "requestTime" : "2018-09-03T06:59:49.691+0000", "errors" : false, "messages" : [ ], "data" : { "totalEndpoints" : null, "totalSuites" : 0, "totalTestCases" : 0, "countByMethod" : [ ], "countByCategory" : [ ], "countBySeverity" : [ ] }, "totalPages" : 0, "totalElements" : 0 } Logs : Assertion [@StatusCode != 404] passed, not expecting [404] and found [200]Assertion [@StatusCode != 500] passed, not expecting [500] and found [200]Assertion [@StatusCode != 401] passed, not expecting [401] and found [200]Assertion [@StatusCode != 200] failed, not expecting [200] but found [200] --- FX Bot ---
1.0
humz_proj_test : ApiV1TestSuitesProjectIdIdCoverageGetPathParamNullValueId - Project : humz_proj_test Job : UAT Env : UAT Region : FXLabs/US_WEST_1 Result : fail Status Code : 200 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Sep 2018 06:59:49 GMT]} Endpoint : http://13.56.210.25/api/v1/test-suites/project-id/null/coverage Request : Response : { "requestId" : "None", "requestTime" : "2018-09-03T06:59:49.691+0000", "errors" : false, "messages" : [ ], "data" : { "totalEndpoints" : null, "totalSuites" : 0, "totalTestCases" : 0, "countByMethod" : [ ], "countByCategory" : [ ], "countBySeverity" : [ ] }, "totalPages" : 0, "totalElements" : 0 } Logs : Assertion [@StatusCode != 404] passed, not expecting [404] and found [200]Assertion [@StatusCode != 500] passed, not expecting [500] and found [200]Assertion [@StatusCode != 401] passed, not expecting [401] and found [200]Assertion [@StatusCode != 200] failed, not expecting [200] but found [200] --- FX Bot ---
test
humz proj test project humz proj test job uat env uat region fxlabs us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response requestid none requesttime errors false messages data totalendpoints null totalsuites totaltestcases countbymethod countbycategory countbyseverity totalpages totalelements logs assertion passed not expecting and found assertion passed not expecting and found assertion passed not expecting and found assertion failed not expecting but found fx bot
1
202,586
15,287,033,228
IssuesEvent
2021-02-23 15:20:27
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: tpcc/headroom/n4cpu16 failed
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
[(roachtest).tpcc/headroom/n4cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_084044.658_n4_workload_run_tpcc Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657159-1612856700-31-n4cpu16:4 -- ./workload run tpcc --warehouses=1470 --histograms=perf/stats.json --ramp=5m0s --duration=2h0m0s {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 4. Command with error: | | ``` | | ./workload run tpcc --warehouses=1470 --histograms=perf/stats.json --ramp=5m0s --duration=2h0m0s {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2654,tpcc.go:174,tpcc.go:238,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650 | main.runTPCC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:174 | main.registerTPCC.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:238 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755 Wraps: (2) 
monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/tpcc/headroom/n4cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=artifacts#/tpcc/headroom/n4cpu16) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fheadroom%2Fn4cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: tpcc/headroom/n4cpu16 failed - [(roachtest).tpcc/headroom/n4cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_084044.658_n4_workload_run_tpcc Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657159-1612856700-31-n4cpu16:4 -- ./workload run tpcc --warehouses=1470 --histograms=perf/stats.json --ramp=5m0s --duration=2h0m0s {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 4. Command with error: | | ``` | | ./workload run tpcc --warehouses=1470 --histograms=perf/stats.json --ramp=5m0s --duration=2h0m0s {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2654,tpcc.go:174,tpcc.go:238,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650 | main.runTPCC | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:174 | main.registerTPCC.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:238 | main.(*testRunner).runTest.func2 | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/tpcc/headroom/n4cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657159&tab=artifacts#/tpcc/headroom/n4cpu16) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fheadroom%2Fn4cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
test
roachtest tpcc headroom failed on runtime goexit usr local go src runtime asm s wraps output in run workload run tpcc wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run tpcc warehouses histograms perf stats json ramp duration pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload run tpcc warehouses histograms perf stats json ramp duration pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go tpcc go tpcc go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runtpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main registertpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more 
artifacts powered by
1
259,978
8,202,273,130
IssuesEvent
2018-09-02 06:56:48
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Torch.save with dill
medium priority
## Issue description With the release of PyTorch 0.4.1, ``torch.save`` with dill now breaks. ## Code example ``` import torch import dill net = torch.nn.LSTM(10, 10) torch.save(net, 'save.pt') # Works torch.save(net, 'save.pt', pickle_module=dill) # Recursion error ``` ``` File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File 
"/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 461, in save pid = self.persistent_id(obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/torch/serialization.py", line 244, in persistent_id elif torch.is_storage(obj): RecursionError: maximum recursion depth exceeded ```
1.0
Torch.save with dill - ## Issue description With the release of PyTorch 0.4.1, ``torch.save`` with dill now breaks. ## Code example ``` import torch import dill net = torch.nn.LSTM(10, 10) torch.save(net, 'save.pt') # Works torch.save(net, 'save.pt', pickle_module=dill) # Recursion error ``` ``` File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File 
"/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/dill/dill.py", line 1015, in save_super pickler.save_reduce(super, (obj.__thisclass__, obj.__self__), obj=obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/pickle.py", line 461, in save pid = self.persistent_id(obj) File "/Users/michaelp/.pyenv/versions/3.6.5/lib/python3.6/site-packages/torch/serialization.py", line 244, in persistent_id elif torch.is_storage(obj): RecursionError: maximum recursion depth exceeded ```
non_test
torch save with dill issue description with the release of pytorch torch save with dill now breaks code example import torch import dill net torch nn lstm torch save net save pt works torch save net save pt pickle module dill recursion error file users michaelp pyenv versions lib site packages dill dill py line in save super pickler save reduce super obj thisclass obj self obj obj file users michaelp pyenv versions lib pickle py line in save reduce save args file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib pickle py line in save tuple save element file users michaelp pyenv versions lib pickle py line in save self save reduce obj obj rv file users michaelp pyenv versions lib pickle py line in save reduce save args file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib pickle py line in save tuple save element file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib site packages dill dill py line in save super pickler save reduce super obj thisclass obj self obj obj file users michaelp pyenv versions lib pickle py line in save reduce save args file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib pickle py line in save tuple save element file users michaelp pyenv versions lib pickle py line in save self save reduce obj obj rv file users michaelp pyenv versions lib pickle py line in save reduce save args file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib pickle py line in save tuple save element file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method 
with explicit self file users michaelp pyenv versions lib site packages dill dill py line in save super pickler save reduce super obj thisclass obj self obj obj file users michaelp pyenv versions lib pickle py line in save reduce save args file users michaelp pyenv versions lib pickle py line in save f self obj call unbound method with explicit self file users michaelp pyenv versions lib pickle py line in save tuple save element file users michaelp pyenv versions lib pickle py line in save pid self persistent id obj file users michaelp pyenv versions lib site packages torch serialization py line in persistent id elif torch is storage obj recursionerror maximum recursion depth exceeded
0
24,454
3,985,520,139
IssuesEvent
2016-05-07 23:09:49
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
wavfile.write() doesn't write 'fact' chunk for floating-point formats
defect scipy.io
This came up in https://github.com/erikd/libsndfile/issues/70: According to http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html the WAVE_FORMAT_IEEE_FLOAT format is a non-PCM format and all non-PCM formats "must have a Fact chunk". When I'm writing 32-bit and 64-bit float files with `scipy.io.wavfile.write()`, the 'fact' chunk is not written: ```python #!/usr/bin/env python3 import numpy as np from scipy.io import wavfile for dtype in 'float32', 'float64': wavfile.write(dtype + '.wav', 44100, np.array([[1.0, -1.0], [0.75, -0.75], [0.5, -0.5], [0.25, -0.25]], dtype=dtype)) ``` Running `sndfile-info` on `float32.wav` and `float64.wav` reveals that they don't have a 'fact' chunk: ``` $ sndfile-info float32.wav Version : libsndfile-1.0.25 ======================================== File : float32.wav Length : 76 RIFF : 68 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 8 Bit Width : 32 Bytes/sec : 352800 data : 32 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010006 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ``` ``` $ sndfile-info float64.wav Version : libsndfile-1.0.25 ======================================== File : float64.wav Length : 108 RIFF : 100 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 16 Bit Width : 64 Bytes/sec : 705600 data : 64 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010007 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ``` I cannot upload the WAV files here in this issue, but you can get a 32-bit example at https://gist.github.com/mgeier/7224433#file-stereo-wav. 
For comparison, this is how it looks like for a 32-bit float file written with [libsndfile](http://www.mega-nerd.com/libsndfile/) (I'm not saying this is correct, but it's definitely different): ``` $ sndfile-info libsndfile_float32.wav Version : libsndfile-1.0.25 ======================================== File : libsndfile_float32.wav Length : 120 RIFF : 112 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 8 Bit Width : 32 Bytes/sec : 352800 fact : 4 frames : 4 PEAK : 24 version : 1 time stamp : 1404898642 Ch Position Value 0 0 1 1 0 1 data : 32 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010006 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ```
1.0
wavfile.write() doesn't write 'fact' chunk for floating-point formats - This came up in https://github.com/erikd/libsndfile/issues/70: According to http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html the WAVE_FORMAT_IEEE_FLOAT format is a non-PCM format and all non-PCM formats "must have a Fact chunk". When I'm writing 32-bit and 64-bit float files with `scipy.io.wavfile.write()`, the 'fact' chunk is not written: ```python #!/usr/bin/env python3 import numpy as np from scipy.io import wavfile for dtype in 'float32', 'float64': wavfile.write(dtype + '.wav', 44100, np.array([[1.0, -1.0], [0.75, -0.75], [0.5, -0.5], [0.25, -0.25]], dtype=dtype)) ``` Running `sndfile-info` on `float32.wav` and `float64.wav` reveals that they don't have a 'fact' chunk: ``` $ sndfile-info float32.wav Version : libsndfile-1.0.25 ======================================== File : float32.wav Length : 76 RIFF : 68 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 8 Bit Width : 32 Bytes/sec : 352800 data : 32 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010006 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ``` ``` $ sndfile-info float64.wav Version : libsndfile-1.0.25 ======================================== File : float64.wav Length : 108 RIFF : 100 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 16 Bit Width : 64 Bytes/sec : 705600 data : 64 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010007 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ``` I cannot upload the WAV files here in this issue, but you can get a 32-bit example at https://gist.github.com/mgeier/7224433#file-stereo-wav. 
For comparison, this is how it looks like for a 32-bit float file written with [libsndfile](http://www.mega-nerd.com/libsndfile/) (I'm not saying this is correct, but it's definitely different): ``` $ sndfile-info libsndfile_float32.wav Version : libsndfile-1.0.25 ======================================== File : libsndfile_float32.wav Length : 120 RIFF : 112 WAVE fmt : 16 Format : 0x3 => WAVE_FORMAT_IEEE_FLOAT Channels : 2 Sample Rate : 44100 Block Align : 8 Bit Width : 32 Bytes/sec : 352800 fact : 4 frames : 4 PEAK : 24 version : 1 time stamp : 1404898642 Ch Position Value 0 0 1 1 0 1 data : 32 End ---------------------------------------- Sample Rate : 44100 Frames : 4 Channels : 2 Format : 0x00010006 Sections : 1 Seekable : TRUE Duration : 00:00:00.000 Signal Max : 1 (0.00 dB) ```
non_test
wavfile write doesn t write fact chunk for floating point formats this came up in according to the wave format ieee float format is a non pcm format and all non pcm formats must have a fact chunk when i m writing bit and bit float files with scipy io wavfile write the fact chunk is not written python usr bin env import numpy as np from scipy io import wavfile for dtype in wavfile write dtype wav np array dtype dtype running sndfile info on wav and wav reveals that they don t have a fact chunk sndfile info wav version libsndfile file wav length riff wave fmt format wave format ieee float channels sample rate block align bit width bytes sec data end sample rate frames channels format sections seekable true duration signal max db sndfile info wav version libsndfile file wav length riff wave fmt format wave format ieee float channels sample rate block align bit width bytes sec data end sample rate frames channels format sections seekable true duration signal max db i cannot upload the wav files here in this issue but you can get a bit example at for comparison this is how it looks like for a bit float file written with i m not saying this is correct but it s definitely different sndfile info libsndfile wav version libsndfile file libsndfile wav length riff wave fmt format wave format ieee float channels sample rate block align bit width bytes sec fact frames peak version time stamp ch position value data end sample rate frames channels format sections seekable true duration signal max db
0
99,126
8,691,147,800
IssuesEvent
2018-12-04 00:03:00
CuBoulder/express
https://api.github.com/repos/CuBoulder/express
closed
Move CU Collections Test Content Module
Still Open at 3.0 improvement:Development improvement:Testing
## Context https://github.com/CuBoulder/express/tree/dev/modules/custom/cu_test_content_admin_bundle/cu_test_content_collections Now that the bundle is in its own repo, the test content should be moved to that codebase.
1.0
Move CU Collections Test Content Module - ## Context https://github.com/CuBoulder/express/tree/dev/modules/custom/cu_test_content_admin_bundle/cu_test_content_collections Now that the bundle is in its own repo, the test content should be moved to that codebase.
test
move cu collections test content module context now that the bundle is in its own repo the test content should be moved to that codebase
1
119,307
25,505,106,288
IssuesEvent
2022-11-28 08:50:14
5l1D3R/Veracode-Github-integration
https://api.github.com/repos/5l1D3R/Veracode-Github-integration
opened
Improper Output Neutralization for Logs ('CRLF Injection') [VID:69]
VeracodeFlaw: Medium Veracode Policy Scan
**Filename:** RemoveAccountCommand.java **Line:** 46 **CWE:** 117 (Improper Output Neutralization for Logs ('CRLF Injection')) <span>This call to org.apache.log4j.Category.info() could result in a log forging attack. Writing untrusted data into a log file allows an attacker to forge log entries or inject malicious content into log files. Corrupted log files can be used to cover an attacker's tracks or as a delivery mechanism for an attack on a log viewing or processing utility. For example, if a web administrator uses a browser-based utility to review logs, a cross-site scripting attack might be possible. The first argument to info() contains tainted data from the variable sqlQuery. The tainted data originated from earlier calls to AnnotationVirtualController.vc_annotation_entry, and java.sql.Statement.executeQuery.</span> <span>Avoid directly embedding user input in log files when possible. Sanitize untrusted data used to construct log entries by using a safe logging mechanism such as the OWASP ESAPI Logger, which will automatically remove unexpected carriage returns and line feeds and can be configured to use HTML entity encoding for non-alphanumeric data. Alternatively, some of the XSS escaping functions from the OWASP Java Encoder project will also sanitize CRLF sequences. Only create a custom blocklist when absolutely necessary. Always validate untrusted input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/117.html">CWE</a> <a href="https://owasp.org/www-community/attacks/Log_Injection">OWASP</a> <a href="https://docs.veracode.com/r/review_cleansers?tocId=nYnZqAenFFZmB75MQrZwuA">Supported Cleansers</a></span>
2.0
Improper Output Neutralization for Logs ('CRLF Injection') [VID:69] - **Filename:** RemoveAccountCommand.java **Line:** 46 **CWE:** 117 (Improper Output Neutralization for Logs ('CRLF Injection')) <span>This call to org.apache.log4j.Category.info() could result in a log forging attack. Writing untrusted data into a log file allows an attacker to forge log entries or inject malicious content into log files. Corrupted log files can be used to cover an attacker's tracks or as a delivery mechanism for an attack on a log viewing or processing utility. For example, if a web administrator uses a browser-based utility to review logs, a cross-site scripting attack might be possible. The first argument to info() contains tainted data from the variable sqlQuery. The tainted data originated from earlier calls to AnnotationVirtualController.vc_annotation_entry, and java.sql.Statement.executeQuery.</span> <span>Avoid directly embedding user input in log files when possible. Sanitize untrusted data used to construct log entries by using a safe logging mechanism such as the OWASP ESAPI Logger, which will automatically remove unexpected carriage returns and line feeds and can be configured to use HTML entity encoding for non-alphanumeric data. Alternatively, some of the XSS escaping functions from the OWASP Java Encoder project will also sanitize CRLF sequences. Only create a custom blocklist when absolutely necessary. Always validate untrusted input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/117.html">CWE</a> <a href="https://owasp.org/www-community/attacks/Log_Injection">OWASP</a> <a href="https://docs.veracode.com/r/review_cleansers?tocId=nYnZqAenFFZmB75MQrZwuA">Supported Cleansers</a></span>
non_test
improper output neutralization for logs crlf injection filename removeaccountcommand java line cwe improper output neutralization for logs crlf injection this call to org apache category info could result in a log forging attack writing untrusted data into a log file allows an attacker to forge log entries or inject malicious content into log files corrupted log files can be used to cover an attacker s tracks or as a delivery mechanism for an attack on a log viewing or processing utility for example if a web administrator uses a browser based utility to review logs a cross site scripting attack might be possible the first argument to info contains tainted data from the variable sqlquery the tainted data originated from earlier calls to annotationvirtualcontroller vc annotation entry and java sql statement executequery avoid directly embedding user input in log files when possible sanitize untrusted data used to construct log entries by using a safe logging mechanism such as the owasp esapi logger which will automatically remove unexpected carriage returns and line feeds and can be configured to use html entity encoding for non alphanumeric data alternatively some of the xss escaping functions from the owasp java encoder project will also sanitize crlf sequences only create a custom blocklist when absolutely necessary always validate untrusted input to ensure that it conforms to the expected format using centralized data validation routines when possible references
0
123,673
16,524,147,648
IssuesEvent
2021-05-26 17:46:59
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
closed
Switch: It's missing a variant with on/off label
Medium bug core design_team duplicate
It’s missing switch example with label on/off inside: Visual core: https://wiki.wdf.sap.corp/wiki/pages/viewpage.action?pageId=2033867295
1.0
Switch: It's missing a variant with on/off label - It’s missing switch example with label on/off inside: Visual core: https://wiki.wdf.sap.corp/wiki/pages/viewpage.action?pageId=2033867295
non_test
switch it s missing a variant with on off label it’s missing switch example with label on off inside visual core
0
790,870
27,839,683,832
IssuesEvent
2023-03-20 11:58:03
crc-org/crc
https://api.github.com/repos/crc-org/crc
closed
[Enhance] Remove tray from installation payload
kind/enhancement priority/major os/windows os/macos
The current installer for CRC includes the tray which has not been maintained for quite some time due to the introduction of Podman Desktop. Since effort will now be shifted to focus on the CRC extension, we have to consider removing the tray payload from the installer. For the following platforms. * [x] Windows (merged: 2023-03-01: https://github.com/crc-org/crc/pull/3512) * [x] macOS This will have a side-effect for our onboarding experience. We need to decide if for the time being we will deliver two installers; with tray and onboarding and one without the tray, or just the installer without.
1.0
[Enhance] Remove tray from installation payload - The current installer for CRC includes the tray which has not been maintained for quite some time due to the introduction of Podman Desktop. Since effort will now be shifted to focus on the CRC extension, we have to consider removing the tray payload from the installer. For the following platforms. * [x] Windows (merged: 2023-03-01: https://github.com/crc-org/crc/pull/3512) * [x] macOS This will have a side-effect for our onboarding experience. We need to decide if for the time being we will deliver two installers; with tray and onboarding and one without the tray, or just the installer without.
non_test
remove tray from installation payload the current installer for crc includes the tray which has not been maintained for quite some time due to the introduction of podman desktop since effort will now be shifted to focus on the crc extension we have to consider removing the tray payload from the installer for the following platforms windows merged macos this will have a side effect for our onboarding experience we need to decide if for the time being we will deliver two installers with tray and onboarding and one without the tray or just the installer without
0
116,465
17,370,030,884
IssuesEvent
2021-07-30 12:49:32
lukebroganws/Java-Demo
https://api.github.com/repos/lukebroganws/Java-Demo
opened
CVE-2014-0107 (Medium) detected in xalan-2.7.0.jar
security vulnerability
## CVE-2014-0107 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xalan-2.7.0.jar</b></p></summary> <p></p> <p>Path to dependency file: Java-Demo/pom.xml</p> <p>Path to vulnerable library: Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xalan-2.7.0.jar,/home/wss-scanner/.m2/repository/xalan/xalan/2.7.0/xalan-2.7.0.jar</p> <p> Dependency Hierarchy: - :x: **xalan-2.7.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/d73a27e2fea07f94b9c092744aef285ec88e27c4">d73a27e2fea07f94b9c092744aef285ec88e27c4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function. 
<p>Publish Date: 2014-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0107>CVE-2014-0107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107</a></p> <p>Release Date: 2014-04-15</p> <p>Fix Resolution: 2.7.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"xalan","packageName":"xalan","packageVersion":"2.7.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"xalan:xalan:2.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2014-0107","vulnerabilityDetails":"The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) 
xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0107","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
True
CVE-2014-0107 (Medium) detected in xalan-2.7.0.jar - ## CVE-2014-0107 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xalan-2.7.0.jar</b></p></summary> <p></p> <p>Path to dependency file: Java-Demo/pom.xml</p> <p>Path to vulnerable library: Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xalan-2.7.0.jar,/home/wss-scanner/.m2/repository/xalan/xalan/2.7.0/xalan-2.7.0.jar</p> <p> Dependency Hierarchy: - :x: **xalan-2.7.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/d73a27e2fea07f94b9c092744aef285ec88e27c4">d73a27e2fea07f94b9c092744aef285ec88e27c4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function. 
<p>Publish Date: 2014-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0107>CVE-2014-0107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107</a></p> <p>Release Date: 2014-04-15</p> <p>Fix Resolution: 2.7.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"xalan","packageName":"xalan","packageVersion":"2.7.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"xalan:xalan:2.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2014-0107","vulnerabilityDetails":"The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) 
xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0107","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in xalan jar cve medium severity vulnerability vulnerable library xalan jar path to dependency file java demo pom xml path to vulnerable library java demo target easybuggy snapshot web inf lib xalan jar home wss scanner repository xalan xalan xalan jar dependency hierarchy x xalan jar vulnerable library found in head commit a href found in base branch main vulnerability details the transformerfactory in apache xalan java before does not properly restrict access to certain properties when feature secure processing is enabled which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted xalan content header xalan entities xslt content header or xslt entities property or a java property that is bound to the xslt system property function publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree xalan xalan isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the transformerfactory in apache xalan java before does not properly restrict access to certain properties when feature secure processing is enabled which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted xalan content header xalan entities xslt content header or xslt entities property or a java property that is bound to the xslt system property function vulnerabilityurl
0
57,614
15,882,772,184
IssuesEvent
2021-04-09 16:28:13
ankitpokhrel/jira-cli
https://api.github.com/repos/ankitpokhrel/jira-cli
closed
Can't build, undefined error with github.com/yuin/goldmark
defect
➜ jira_go go get github.com/ankitpokhrel/jira-cli/cmd/jira go: found github.com/ankitpokhrel/jira-cli/cmd/jira in github.com/ankitpokhrel/jira-cli v0.0.0-20210323162451-7ea7fb0ad495 # github.com/charmbracelet/glamour/ansi ../../go/pkg/mod/github.com/charmbracelet/glamour@v0.2.0/ansi/renderer.go:75:15: undefined: "github.com/yuin/goldmark/extension/ast".KindFootnoteBackLink ➜ jira_go go version go version go1.15.7 darwin/amd64 Tried to work around error by installing goldmark with go get, but still receiving the same error.
1.0
Can't build, undefined error with github.com/yuin/goldmark - ➜ jira_go go get github.com/ankitpokhrel/jira-cli/cmd/jira go: found github.com/ankitpokhrel/jira-cli/cmd/jira in github.com/ankitpokhrel/jira-cli v0.0.0-20210323162451-7ea7fb0ad495 # github.com/charmbracelet/glamour/ansi ../../go/pkg/mod/github.com/charmbracelet/glamour@v0.2.0/ansi/renderer.go:75:15: undefined: "github.com/yuin/goldmark/extension/ast".KindFootnoteBackLink ➜ jira_go go version go version go1.15.7 darwin/amd64 Tried to work around error by installing goldmark with go get, but still receiving the same error.
non_test
can t build undefined error with github com yuin goldmark ➜ jira go go get github com ankitpokhrel jira cli cmd jira go found github com ankitpokhrel jira cli cmd jira in github com ankitpokhrel jira cli github com charmbracelet glamour ansi go pkg mod github com charmbracelet glamour ansi renderer go undefined github com yuin goldmark extension ast kindfootnotebacklink ➜ jira go go version go version darwin tried to work around error by installing goldmark with go get but still receiving the same error
0
105,949
9,104,822,994
IssuesEvent
2019-02-20 19:10:35
ethersphere/go-ethereum
https://api.github.com/repos/ethersphere/go-ethereum
opened
network/stream: investigate TestDeliveryFromNodes timeouts
area:stability test
From @zelig: > this test really should not time out, it is either set to too small timeout duration or theres a bug. **How to reproduce** *TODO*
1.0
network/stream: investigate TestDeliveryFromNodes timeouts - From @zelig: > this test really should not time out, it is either set to too small timeout duration or theres a bug. **How to reproduce** *TODO*
test
network stream investigate testdeliveryfromnodes timeouts from zelig this test really should not time out it is either set to too small timeout duration or theres a bug how to reproduce todo
1
136,894
18,751,500,840
IssuesEvent
2021-11-05 02:59:26
Dima2022/Resiliency-Studio
https://api.github.com/repos/Dima2022/Resiliency-Studio
closed
CVE-2018-3823 (Medium) detected in elasticsearch-2.3.1.jar - autoclosed
security vulnerability
## CVE-2018-3823 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elasticsearch-2.3.1.jar</b></p></summary> <p>Elasticsearch - Open Source, Distributed, RESTful Search Engine</p> <p>Path to dependency file: Resiliency-Studio/resiliency-studio-service/pom.xml</p> <p>Path to vulnerable library: ory/org/elasticsearch/elasticsearch/2.3.1/elasticsearch-2.3.1.jar</p> <p> Dependency Hierarchy: - :x: **elasticsearch-2.3.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> X-Pack Machine Learning versions before 6.2.4 and 5.6.9 had a cross-site scripting (XSS) vulnerability. Users with manage_ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ML users viewing the results of the jobs. 
<p>Publish Date: 2018-09-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3823>CVE-2018-3823</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://discuss.elastic.co/t/elastic-stack-6-2-4-and-5-6-9-security-update/128422">https://discuss.elastic.co/t/elastic-stack-6-2-4-and-5-6-9-security-update/128422</a></p> <p>Release Date: 2018-09-19</p> <p>Fix Resolution: org.elasticsearch:elasticsearch:5.6.9,org.elasticsearch:elasticsearch:6.2.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.elasticsearch","packageName":"elasticsearch","packageVersion":"2.3.1","packageFilePaths":["/resiliency-studio-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.elasticsearch:elasticsearch:2.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.elasticsearch:elasticsearch:5.6.9,org.elasticsearch:elasticsearch:6.2.4"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2018-3823","vulnerabilityDetails":"X-Pack Machine Learning versions before 6.2.4 and 5.6.9 had a cross-site scripting (XSS) 
vulnerability. Users with manage_ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ML users viewing the results of the jobs.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3823","cvss3Severity":"medium","cvss3Score":"5.4","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-3823 (Medium) detected in elasticsearch-2.3.1.jar - autoclosed - ## CVE-2018-3823 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elasticsearch-2.3.1.jar</b></p></summary> <p>Elasticsearch - Open Source, Distributed, RESTful Search Engine</p> <p>Path to dependency file: Resiliency-Studio/resiliency-studio-service/pom.xml</p> <p>Path to vulnerable library: ory/org/elasticsearch/elasticsearch/2.3.1/elasticsearch-2.3.1.jar</p> <p> Dependency Hierarchy: - :x: **elasticsearch-2.3.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> X-Pack Machine Learning versions before 6.2.4 and 5.6.9 had a cross-site scripting (XSS) vulnerability. Users with manage_ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ML users viewing the results of the jobs. 
<p>Publish Date: 2018-09-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3823>CVE-2018-3823</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://discuss.elastic.co/t/elastic-stack-6-2-4-and-5-6-9-security-update/128422">https://discuss.elastic.co/t/elastic-stack-6-2-4-and-5-6-9-security-update/128422</a></p> <p>Release Date: 2018-09-19</p> <p>Fix Resolution: org.elasticsearch:elasticsearch:5.6.9,org.elasticsearch:elasticsearch:6.2.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.elasticsearch","packageName":"elasticsearch","packageVersion":"2.3.1","packageFilePaths":["/resiliency-studio-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.elasticsearch:elasticsearch:2.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.elasticsearch:elasticsearch:5.6.9,org.elasticsearch:elasticsearch:6.2.4"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2018-3823","vulnerabilityDetails":"X-Pack Machine Learning versions before 6.2.4 and 5.6.9 had a cross-site scripting (XSS) 
vulnerability. Users with manage_ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ML users viewing the results of the jobs.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3823","cvss3Severity":"medium","cvss3Score":"5.4","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in elasticsearch jar autoclosed cve medium severity vulnerability vulnerable library elasticsearch jar elasticsearch open source distributed restful search engine path to dependency file resiliency studio resiliency studio service pom xml path to vulnerable library ory org elasticsearch elasticsearch elasticsearch jar dependency hierarchy x elasticsearch jar vulnerable library found in head commit a href vulnerability details x pack machine learning versions before and had a cross site scripting xss vulnerability users with manage ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ml users viewing the results of the jobs publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org elasticsearch elasticsearch org elasticsearch elasticsearch rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org elasticsearch elasticsearch isminimumfixversionavailable true minimumfixversion org elasticsearch elasticsearch org elasticsearch elasticsearch basebranches vulnerabilityidentifier cve vulnerabilitydetails x pack machine learning versions before and had a cross site scripting xss vulnerability users with manage ml permissions could create jobs containing malicious data as part of their configuration that could allow the attacker to obtain sensitive information from or perform destructive actions on behalf of other ml users 
viewing the results of the jobs vulnerabilityurl
0
237,875
19,682,094,792
IssuesEvent
2022-01-11 17:46:10
opensearch-project/opensearch-build
https://api.github.com/repos/opensearch-project/opensearch-build
closed
Nightly bundle builds do not include full bundle until T-14 days to release.
ci-test-automation
This is a suggestion to improve the release template and release cycle. While performance testing for the 1.2 release we found a [performance degradation](https://github.com/opensearch-project/OpenSearch/issues/1560) in the bundle. To try and diagnose the issue we pulled older nightly builds at various times and re-ran perf tests to identify when a degradation may have been introduced. Ideally we would have these performance tests running nightly. However, even when that is completed we will unfortunately be running against different configurations of the bundle because components are added to the manifest only [T-14 days](https://github.com/opensearch-project/opensearch-build/blob/main/.github/ISSUE_TEMPLATE/release_template.md#cicd---ends-release-minus-14-days) before release. For example - right now for [2.0](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/2.0.0/opensearch-2.0.0.yml) and [1.3](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.0/opensearch-1.3.0.yml) releases, we have no components being built nightly other than OpenSearch. After every release we would have a functioning manifest for the next major and minor release that includes the known components at that time. This will help us identify when an issue was introduced and will further aid us when we are running nightly benchmarks against the full bundle. I have opened #1139 as a suggestion. An alternative here is we cut the release issue for the next release immediately on release day and ensure the manifest is updated right away.
1.0
Nightly bundle builds do not include full bundle until T-14 days to release. - This is a suggestion to improve the release template and release cycle. While performance testing for the 1.2 release we found a [performance degradation](https://github.com/opensearch-project/OpenSearch/issues/1560) in the bundle. To try and diagnose the issue we pulled older nightly builds at various times and re-ran perf tests to identify when a degradation may have been introduced. Ideally we would have these performance tests running nightly. However, even when that is completed we will unfortunately be running against different configurations of the bundle because components are added to the manifest only [T-14 days](https://github.com/opensearch-project/opensearch-build/blob/main/.github/ISSUE_TEMPLATE/release_template.md#cicd---ends-release-minus-14-days) before release. For example - right now for [2.0](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/2.0.0/opensearch-2.0.0.yml) and [1.3](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.0/opensearch-1.3.0.yml) releases, we have no components being built nightly other than OpenSearch. After every release we would have a functioning manifest for the next major and minor release that includes the known components at that time. This will help us identify when an issue was introduced and will further aid us when we are running nightly benchmarks against the full bundle. I have opened #1139 as a suggestion. An alternative here is we cut the release issue for the next release immediately on release day and ensure the manifest is updated right away.
test
nightly bundle builds do not include full bundle until t days to release this is a suggestion to improve the release template and release cycle while performance testing for the release we found a in the bundle to try and diagnose the issue we pulled older nightly builds at various times and re ran perf tests to identify when a degradation may have been introduced ideally we would have these performance tests running nightly however even when that is completed we will unfortunately be running against different configurations of the bundle because components are added to the manifest only before release for example right now for and releases we have no components being built nightly other than opensearch after every release we would have a functioning manifest for the next major and minor release that includes the known components at that time this will help us identify when an issue was introduced and will further aid us when we are running nightly benchmarks against the full bundle i have opened as a suggestion an alternative here is we cut the release issue for the next release immediately on release day and ensure the manifest is updated right away
1
171,637
13,242,031,811
IssuesEvent
2020-08-19 09:07:51
INL/corpus-frontend
https://api.github.com/repos/INL/corpus-frontend
closed
Spelling Case- and diacritics-sensitive
status: finished/testing
De correcte spelling is met een streepje. Dus Case- and diacritics-sensitive.
1.0
Spelling Case- and diacritics-sensitive - De correcte spelling is met een streepje. Dus Case- and diacritics-sensitive.
test
spelling case and diacritics sensitive de correcte spelling is met een streepje dus case and diacritics sensitive
1
36,061
4,712,353,334
IssuesEvent
2016-10-14 16:29:13
MozillaFoundation/Mozfest2016_production
https://api.github.com/repos/MozillaFoundation/Mozfest2016_production
closed
Swag - Lanyards
Design Suppliers swag
2000 lanyards required for the entire weekend. - [x] Check to source internally - [x] Supplier Found - [x] Delivery Date Agreed
1.0
Swag - Lanyards - 2000 lanyards required for the entire weekend. - [x] Check to source internally - [x] Supplier Found - [x] Delivery Date Agreed
non_test
swag lanyards lanyards required for the entire weekend check to source internally supplier found delivery date agreed
0
137,712
11,151,243,196
IssuesEvent
2019-12-24 03:09:09
a2000-erp-team/WEBERP
https://api.github.com/repos/a2000-erp-team/WEBERP
closed
Procurement->Reporting->Purchase Analysis reports->Purchase analysis buyer wise. The total should be in 2 decimal places.
WEB ERP Testing By Katrina
![image.png](https://images.zenhubusercontent.com/5cf8a04f2e4fe4691d7f073e/80486a99-c11e-458b-be33-f32aafb0b903)
1.0
Procurement->Reporting->Purchase Analysis reports->Purchase analysis buyer wise. The total should be in 2 decimal places. - ![image.png](https://images.zenhubusercontent.com/5cf8a04f2e4fe4691d7f073e/80486a99-c11e-458b-be33-f32aafb0b903)
test
procurement reporting purchase analysis reports purchase analysis buyer wise the total should be in decimal places
1
13,351
3,329,849,387
IssuesEvent
2015-11-11 05:59:51
connolly/desc
https://api.github.com/repos/connolly/desc
opened
WL2-DC1-SW4:T2
DC1 DC1 SW: Two-point null test pipeline SW Systematic errors framework wl
Define null tests that can be performed given the LSST cadence, deep drilling fields, colors, and ancillary data sets.
1.0
WL2-DC1-SW4:T2 - Define null tests that can be performed given the LSST cadence, deep drilling fields, colors, and ancillary data sets.
test
define null tests that can be performed given the lsst cadence deep drilling fields colors and ancillary data sets
1
74,668
7,435,248,136
IssuesEvent
2018-03-26 13:41:57
Microsoft/vscode
https://api.github.com/repos/Microsoft/vscode
opened
Test global error navigation
testplan-item
Test for #14783 Complexity: 1 - [ ] Any OS - @xxxx - [ ] Any OS - @xxxx The F8-feature has been changes so that it visits all errors/warnings/info (instead of staying inside a file). Test that * you can navigate to all (known) markers * you visit errors, then warnings, then infos per file but not *all* errors, then *all* warnings etc * you can start inside a file without diagnostics
1.0
Test global error navigation - Test for #14783 Complexity: 1 - [ ] Any OS - @xxxx - [ ] Any OS - @xxxx The F8-feature has been changes so that it visits all errors/warnings/info (instead of staying inside a file). Test that * you can navigate to all (known) markers * you visit errors, then warnings, then infos per file but not *all* errors, then *all* warnings etc * you can start inside a file without diagnostics
test
test global error navigation test for complexity any os xxxx any os xxxx the feature has been changes so that it visits all errors warnings info instead of staying inside a file test that you can navigate to all known markers you visit errors then warnings then infos per file but not all errors then all warnings etc you can start inside a file without diagnostics
1
318,424
27,303,284,486
IssuesEvent
2023-02-24 05:14:47
wpfoodmanager/wp-food-manager
https://api.github.com/repos/wpfoodmanager/wp-food-manager
closed
Food thumbnail can be added from front side
In Testing Issue Resolved
If any organizer create a food from front side then not able to add different thumbnail and banner image.. What if organizer want to display two different image same as admin. From admin can create food thumbnail and banner separately. ![image](https://user-images.githubusercontent.com/121149500/217168816-9d97c6b6-7c64-437f-bff9-e5bdd9af696b.png) ![image](https://user-images.githubusercontent.com/121149500/217168859-bc2b08ac-dd3d-4503-9408-aafa5ddc88df.png)
1.0
Food thumbnail can be added from front side - If any organizer create a food from front side then not able to add different thumbnail and banner image.. What if organizer want to display two different image same as admin. From admin can create food thumbnail and banner separately. ![image](https://user-images.githubusercontent.com/121149500/217168816-9d97c6b6-7c64-437f-bff9-e5bdd9af696b.png) ![image](https://user-images.githubusercontent.com/121149500/217168859-bc2b08ac-dd3d-4503-9408-aafa5ddc88df.png)
test
food thumbnail can be added from front side if any organizer create a food from front side then not able to add different thumbnail and banner image what if organizer want to display two different image same as admin from admin can create food thumbnail and banner separately
1
30,663
11,842,017,212
IssuesEvent
2020-03-23 22:01:41
Mohib-hub/karate
https://api.github.com/repos/Mohib-hub/karate
opened
CVE-2018-12023 (High) detected in jackson-databind-2.8.5.jar, jackson-databind-2.8.8.jar
security vulnerability
## CVE-2018-12023 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.5.jar</b>, <b>jackson-databind-2.8.8.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.8.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/karate/examples/jobserver/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.5/b3035f37e674c04dafe36a660c3815cc59f764e2/jackson-databind-2.8.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.5/b3035f37e674c04dafe36a660c3815cc59f764e2/jackson-databind-2.8.5.jar</p> <p> Dependency Hierarchy: - cucumber-reporting-3.8.0.jar (Root Library) - :x: **jackson-databind-2.8.5.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.8/bf88c7b27e95cbadce4e7c316a56c3efffda8026/jackson-databind-2.8.8.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.8/bf88c7b27e95cbadce4e7c316a56c3efffda8026/jackson-databind-2.8.8.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.3.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Mohib-hub/karate/commit/c8766c8277306046ef9c6f01148b98b0d2bafe02">c8766c8277306046ef9c6f01148b98b0d2bafe02</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload. <p>Publish Date: 2019-03-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023>CVE-2018-12023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p> <p>Release Date: 2019-03-21</p> <p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":true,"dependencyTree":"net.masterthought:cucumber-reporting:3.8.0;com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.5.3.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"}],"vulnerabilityIdentifier":"CVE-2018-12023","vulnerabilityDetails":"An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-12023 (High) detected in jackson-databind-2.8.5.jar, jackson-databind-2.8.8.jar - ## CVE-2018-12023 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.5.jar</b>, <b>jackson-databind-2.8.8.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.8.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/karate/examples/jobserver/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.5/b3035f37e674c04dafe36a660c3815cc59f764e2/jackson-databind-2.8.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.5/b3035f37e674c04dafe36a660c3815cc59f764e2/jackson-databind-2.8.5.jar</p> <p> Dependency Hierarchy: - cucumber-reporting-3.8.0.jar (Root Library) - :x: **jackson-databind-2.8.5.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.8/bf88c7b27e95cbadce4e7c316a56c3efffda8026/jackson-databind-2.8.8.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.8/bf88c7b27e95cbadce4e7c316a56c3efffda8026/jackson-databind-2.8.8.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.3.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Mohib-hub/karate/commit/c8766c8277306046ef9c6f01148b98b0d2bafe02">c8766c8277306046ef9c6f01148b98b0d2bafe02</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload. <p>Publish Date: 2019-03-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023>CVE-2018-12023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p> <p>Release Date: 2019-03-21</p> <p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":true,"dependencyTree":"net.masterthought:cucumber-reporting:3.8.0;com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.5.3.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.4, 2.8.11.2, 2.9.6"}],"vulnerabilityIdentifier":"CVE-2018-12023","vulnerabilityDetails":"An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_test
cve high detected in jackson databind jar jackson databind jar cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm karate examples jobserver build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy cucumber reporting jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details an issue was discovered in fasterxml jackson databind prior to and when default typing is enabled either globally or for a specific property the service has the oracle jdbc jar in the classpath and an attacker can provide an ldap service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue was discovered in fasterxml jackson databind prior to and when default typing is enabled either globally or for a specific property the service has the oracle jdbc jar in the classpath and an attacker can provide an ldap service to access it is possible to make the service execute a malicious payload vulnerabilityurl
0
84,507
7,924,332,835
IssuesEvent
2018-07-05 16:35:11
spring-cloud/spring-cloud-dataflow-acceptance-tests
https://api.github.com/repos/spring-cloud/spring-cloud-dataflow-acceptance-tests
opened
Revisit hardcoded server and skipper versions in the scripts
test-coverage-ready
As a developer, I'd like to review and refactor the hardcoded version dependencies and have them resolved from the supplied configuration properties instead.
1.0
Revisit hardcoded server and skipper versions in the scripts - As a developer, I'd like to review and refactor the hardcoded version dependencies and have them resolved from the supplied configuration properties instead.
test
revisit hardcoded server and skipper versions in the scripts as a developer i d like to review and refactor the hardcoded version dependencies and have them resolved from the supplied configuration properties instead
1
209,642
16,047,402,013
IssuesEvent
2021-04-22 15:02:41
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
[Testerina] Mocked HTTP services not initialize when `http:OAuth2ClientCredentialsGrantConfig` is used for HTTP clients
Area/Testerina Priority/Blocker Team/TestFramework Type/Bug
**Description:** When I use `http:OAuth2ClientCredentialsGrantConfig` in HTTP client and testing the project, the mocking services are not getting initialized and gives connection refused error. ``` import ballerina/http; import ballerina/io; http:Client logAnalyticsEp = check new("http://localhost:9096/v1/workspaces", { auth: { tokenUrl: "http://localhost:9095/azureoauth2/token", clientId: "clid124", clientSecret: "clsc972" } }); service /svc on new http:Listener(9090) { resource function get path(http:Caller caller, http:Request request) { var logAnalyticsStatus = logAnalyticsEp->post("/query", {query: "print 1"}); error? result = caller->respond("Hello Ballerina!"); if (result is error) { io:println("Error in responding: ", result); } } } ``` error: ``` time = 2021-04-07 18:53:35,783 level = ERROR module = ballerina/oauth2 message = "Failed to call the token endpoint." error = "Failed to send the request to the endpoint. Connection refused" error: error Error ("Failed to call the token endpoint.",error("Failed to send the request to the endpoint. Connection refused")) error: there are test failures ``` **Steps to reproduce:** 1. Write an HTTP client which uses `http:OAuth2ClientCredentialsGrantConfig` (or you can clone https://github.com/madushajg/ballerina-http-client-oauth2) 2.run `bal test` **Affected Versions:** Swan Lake Alpha 3 **Related Issues:** https://github.com/ballerina-platform/ballerina-lang/issues/29900
2.0
[Testerina] Mocked HTTP services not initialize when `http:OAuth2ClientCredentialsGrantConfig` is used for HTTP clients - **Description:** When I use `http:OAuth2ClientCredentialsGrantConfig` in HTTP client and testing the project, the mocking services are not getting initialized and gives connection refused error. ``` import ballerina/http; import ballerina/io; http:Client logAnalyticsEp = check new("http://localhost:9096/v1/workspaces", { auth: { tokenUrl: "http://localhost:9095/azureoauth2/token", clientId: "clid124", clientSecret: "clsc972" } }); service /svc on new http:Listener(9090) { resource function get path(http:Caller caller, http:Request request) { var logAnalyticsStatus = logAnalyticsEp->post("/query", {query: "print 1"}); error? result = caller->respond("Hello Ballerina!"); if (result is error) { io:println("Error in responding: ", result); } } } ``` error: ``` time = 2021-04-07 18:53:35,783 level = ERROR module = ballerina/oauth2 message = "Failed to call the token endpoint." error = "Failed to send the request to the endpoint. Connection refused" error: error Error ("Failed to call the token endpoint.",error("Failed to send the request to the endpoint. Connection refused")) error: there are test failures ``` **Steps to reproduce:** 1. Write an HTTP client which uses `http:OAuth2ClientCredentialsGrantConfig` (or you can clone https://github.com/madushajg/ballerina-http-client-oauth2) 2.run `bal test` **Affected Versions:** Swan Lake Alpha 3 **Related Issues:** https://github.com/ballerina-platform/ballerina-lang/issues/29900
test
mocked http services not initialize when http is used for http clients description when i use http in http client and testing the project the mocking services are not getting initialized and gives connection refused error import ballerina http import ballerina io http client loganalyticsep check new auth tokenurl clientid clientsecret service svc on new http listener resource function get path http caller caller http request request var loganalyticsstatus loganalyticsep post query query print error result caller respond hello ballerina if result is error io println error in responding result error time level error module ballerina message failed to call the token endpoint error failed to send the request to the endpoint connection refused error error error failed to call the token endpoint error failed to send the request to the endpoint connection refused error there are test failures steps to reproduce write an http client which uses http or you can clone run bal test affected versions swan lake alpha related issues
1
69,523
7,137,487,098
IssuesEvent
2018-01-23 11:10:28
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
skylark_git_repository_test fails under standalone
On hold: more data needed P2 category: extensibility > external repositories category: misc > testing external-repos-triaged type: bug
### Description of the problem / feature request: Bazel's `//src/test/shell/bazel:skylark_git_repository_test` relies on execution under `test --spawn_strategy=sandboxed`. Running it with `test --spawn_strategy=standalone` causes the test to fail with errors during `git_repository`, failing to locate the revision for reset. This appears to be less of a correctness issue for the test, than a minor environmental difference that isn't reflected in standalone execution, but should probably work without sandboxing on. Caveat for swapping between these two options: test results will be cached, and not invalidated based on the switch. Prefer test runs with `--nocache_test_results`. This failure is also unreproducible without invoking the *test* itself with `--spawn_strategy=standalone`; if only the offending `bazel run` invocations are put under `standalone`, the tests do not fail. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. `bazel test --spawn_strategy=sandboxed //src/test/shell/bazel:skylark_git_repository_test` ### What operating system are you running Bazel on? linux ### What's the output of `bazel info release`? 0.9.0 ### Have you found anything relevant by searching the web? No ### Any other information, logs, or outputs that you want to share? Log files from execution of 0.9.0 under standalone strategy are attached: [skylark_git_repository.log](https://github.com/bazelbuild/bazel/files/1613723/skylark_git_repository.log)
1.0
skylark_git_repository_test fails under standalone - ### Description of the problem / feature request: Bazel's `//src/test/shell/bazel:skylark_git_repository_test` relies on execution under `test --spawn_strategy=sandboxed`. Running it with `test --spawn_strategy=standalone` causes the test to fail with errors during `git_repository`, failing to locate the revision for reset. This appears to be less of a correctness issue for the test, than a minor environmental difference that isn't reflected in standalone execution, but should probably work without sandboxing on. Caveat for swapping between these two options: test results will be cached, and not invalidated based on the switch. Prefer test runs with `--nocache_test_results`. This failure is also unreproducible without invoking the *test* itself with `--spawn_strategy=standalone`; if only the offending `bazel run` invocations are put under `standalone`, the tests do not fail. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. `bazel test --spawn_strategy=sandboxed //src/test/shell/bazel:skylark_git_repository_test` ### What operating system are you running Bazel on? linux ### What's the output of `bazel info release`? 0.9.0 ### Have you found anything relevant by searching the web? No ### Any other information, logs, or outputs that you want to share? Log files from execution of 0.9.0 under standalone strategy are attached: [skylark_git_repository.log](https://github.com/bazelbuild/bazel/files/1613723/skylark_git_repository.log)
test
skylark git repository test fails under standalone description of the problem feature request bazel s src test shell bazel skylark git repository test relies on execution under test spawn strategy sandboxed running it with test spawn strategy standalone causes the test to fail with errors during git repository failing to locate the revision for reset this appears to be less of a correctness issue for the test than a minor environmental difference that isn t reflected in standalone execution but should probably work without sandboxing on caveat for swapping between these two options test results will be cached and not invalidated based on the switch prefer test runs with nocache test results this failure is also unreproducible without invoking the test itself with spawn strategy standalone if only the offending bazel run invocations are put under standalone the tests do not fail bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible bazel test spawn strategy sandboxed src test shell bazel skylark git repository test what operating system are you running bazel on linux what s the output of bazel info release have you found anything relevant by searching the web no any other information logs or outputs that you want to share log files from execution of under standalone strategy are attached
1
313,065
23,454,538,674
IssuesEvent
2022-08-16 07:40:55
cardano-community/koios-artifacts
https://api.github.com/repos/cardano-community/koios-artifacts
closed
Update tx_info specs for babbage tx formats
documentation
**Describe the bug** Corresponding specs updates for cardano-community/guild-operators#1464
1.0
Update tx_info specs for babbage tx formats - **Describe the bug** Corresponding specs updates for cardano-community/guild-operators#1464
non_test
update tx info specs for babbage tx formats describe the bug corresponding specs updates for cardano community guild operators
0
289,328
24,980,993,934
IssuesEvent
2022-11-02 11:41:10
benoitkugler/maths-online
https://api.github.com/repos/benoitkugler/maths-online
closed
[eleve] Trivial: message during the end-of-turn waiting phase
enhancement Accepté A tester
Display "X must roll the die" or "X is still thinking about the question" on player Y's screen as soon as that player taps "next" and lands back on the game board
1.0
[eleve] Trivial: message during the end-of-turn waiting phase - Display "X must roll the die" or "X is still thinking about the question" on player Y's screen as soon as that player taps "next" and lands back on the game board
test
trivial message during the end of turn waiting phase display x must roll the die or x is still thinking about the question on player y s screen as soon as that player taps next and lands back on the game board
1
21,238
6,132,496,581
IssuesEvent
2017-06-25 02:56:23
ganeti/ganeti
https://api.github.com/repos/ganeti/ganeti
opened
gnt-instance reinstall doesn't update config.data
imported_from_google_code Status:Fixed
Originally reported of Google Code with ID 1193. ``` What software version are you running? Please provide the output of "gnt- cluster --version", "gnt-cluster version", and "hspace --version". root@m:~/ganeti# gnt-cluster version Software version: 2.18.0~alpha1 Internode protocol: 2180000 Configuration format: 2180000 OS api version: 20 Export interface: 0 VCS version: (ganeti) version v2.17.0beta1-252-g8ff1ce9 root@m:~/ganeti# gnt-cluster --version gnt-cluster (ganeti v2.17.0beta1-252-g8ff1ce9) 2.18.0~alpha1 root@m:~/ganeti# hspace --version hspace (ganeti) version v2.17.0beta1-252-g8ff1ce9 compiled with ghc 7.6 running on linux x86_64 <b>What distribution are you using?</b> root@snf-728370:~/ganeti# lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.6 (jessie) Release: 8.6 Codename: jessie <b>What steps will reproduce the problem?</b> 1. gnt-instance add -o snf-image+default --os-parameters img_passwd=passw0rd,img_format=diskdump,img_id=debian_base-8.0-x86_64 -t plain --disk 0:size=2G --no-name-check --no-ip-check -n dio.test.gnt.grnet.gr tsiou 2. gnt-instance shutdown --timeout=0 tsiou 3. gnt-instance reinstall -O img_passwd=f00 tsiou <b>What is the expected output? What do you see instead?</b> Expected output: root@dio:~# gnt-instance info early_change2 - Instance name: early_change2 UUID: a6921462-81c4-48e7-ae9d-38ac28aaf937 Serial number: 7 Creation time: 2016-11-09 17:16:16 Modification time: 2016-11-09 17:19:59 State: configured to be down, actual state is down Nodes: - primary: dio.test.gnt.grnet.gr group: default (UUID a034f01f-90b8-4b4a-9934-fa515135abd4) - secondaries: Operating system: snf-image+default Operating system parameters: img_format: diskdump img_id: debian_base-8.0-x86_64 img_passwd: f00 ... 
However, I see: root@dio:~# gnt-instance info early_change2 - Instance name: early_change2 UUID: a6921462-81c4-48e7-ae9d-38ac28aaf937 Serial number: 7 Creation time: 2016-11-09 17:16:16 Modification time: 2016-11-09 17:19:59 State: configured to be down, actual state is down Nodes: - primary: dio.test.gnt.grnet.gr group: default (UUID a034f01f-90b8-4b4a-9934-fa515135abd4) - secondaries: Operating system: snf-image+default Operating system parameters: img_format: diskdump img_id: debian_base-8.0-x86_64 img_passwd: passw0rd ... The password was changed successfully (checked by logging in through a VNC console) but the OS params reported by gnt-instance info remain unchanged. <b>Please provide any additional information below.</b> Even worse: Trying to reinstall an instance twice might result in unexpected behavior: 1. gnt-instance add -o snf-image+default --os-parameters img_passwd=passw0rd,img_format=diskdump,img_id=debian_base-8.0-x86_64 -t plain --disk 0:size=2G --no-name-check --no-ip-check -n dio.test.gnt.grnet.gr tsiou2 2. gnt-instance reinstall -O img_id=ubuntu_desktop-16.04-x86_64 3. gnt-instance reinstall The second reinstall installs debian_base-8.0-x86_64 back because that's what the config.data knows about the img_id OS param! Note: The OS provider doesn't really make any difference; I had snf-image "ready" so that's what I used. ``` Originally added on 2016-11-15 09:39:11 +0000 UTC.
1.0
gnt-instance reinstall doesn't update config.data - Originally reported of Google Code with ID 1193. ``` What software version are you running? Please provide the output of "gnt- cluster --version", "gnt-cluster version", and "hspace --version". root@m:~/ganeti# gnt-cluster version Software version: 2.18.0~alpha1 Internode protocol: 2180000 Configuration format: 2180000 OS api version: 20 Export interface: 0 VCS version: (ganeti) version v2.17.0beta1-252-g8ff1ce9 root@m:~/ganeti# gnt-cluster --version gnt-cluster (ganeti v2.17.0beta1-252-g8ff1ce9) 2.18.0~alpha1 root@m:~/ganeti# hspace --version hspace (ganeti) version v2.17.0beta1-252-g8ff1ce9 compiled with ghc 7.6 running on linux x86_64 <b>What distribution are you using?</b> root@snf-728370:~/ganeti# lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.6 (jessie) Release: 8.6 Codename: jessie <b>What steps will reproduce the problem?</b> 1. gnt-instance add -o snf-image+default --os-parameters img_passwd=passw0rd,img_format=diskdump,img_id=debian_base-8.0-x86_64 -t plain --disk 0:size=2G --no-name-check --no-ip-check -n dio.test.gnt.grnet.gr tsiou 2. gnt-instance shutdown --timeout=0 tsiou 3. gnt-instance reinstall -O img_passwd=f00 tsiou <b>What is the expected output? What do you see instead?</b> Expected output: root@dio:~# gnt-instance info early_change2 - Instance name: early_change2 UUID: a6921462-81c4-48e7-ae9d-38ac28aaf937 Serial number: 7 Creation time: 2016-11-09 17:16:16 Modification time: 2016-11-09 17:19:59 State: configured to be down, actual state is down Nodes: - primary: dio.test.gnt.grnet.gr group: default (UUID a034f01f-90b8-4b4a-9934-fa515135abd4) - secondaries: Operating system: snf-image+default Operating system parameters: img_format: diskdump img_id: debian_base-8.0-x86_64 img_passwd: f00 ... 
However, I see: root@dio:~# gnt-instance info early_change2 - Instance name: early_change2 UUID: a6921462-81c4-48e7-ae9d-38ac28aaf937 Serial number: 7 Creation time: 2016-11-09 17:16:16 Modification time: 2016-11-09 17:19:59 State: configured to be down, actual state is down Nodes: - primary: dio.test.gnt.grnet.gr group: default (UUID a034f01f-90b8-4b4a-9934-fa515135abd4) - secondaries: Operating system: snf-image+default Operating system parameters: img_format: diskdump img_id: debian_base-8.0-x86_64 img_passwd: passw0rd ... The password was changed successfully (checked by logging in through a VNC console) but the OS params reported by gnt-instance info remain unchanged. <b>Please provide any additional information below.</b> Even worse: Trying to reinstall an instance twice might result in unexpected behavior: 1. gnt-instance add -o snf-image+default --os-parameters img_passwd=passw0rd,img_format=diskdump,img_id=debian_base-8.0-x86_64 -t plain --disk 0:size=2G --no-name-check --no-ip-check -n dio.test.gnt.grnet.gr tsiou2 2. gnt-instance reinstall -O img_id=ubuntu_desktop-16.04-x86_64 3. gnt-instance reinstall The second reinstall installs debian_base-8.0-x86_64 back because that's what the config.data knows about the img_id OS param! Note: The OS provider doesn't really make any difference; I had snf-image "ready" so that's what I used. ``` Originally added on 2016-11-15 09:39:11 +0000 UTC.
non_test
gnt instance reinstall doesn t update config data originally reported of google code with id what software version are you running please provide the output of gnt cluster version gnt cluster version and hspace version root m ganeti gnt cluster version software version internode protocol configuration format os api version export interface vcs version ganeti version root m ganeti gnt cluster version gnt cluster ganeti root m ganeti hspace version hspace ganeti version compiled with ghc running on linux what distribution are you using root snf ganeti lsb release a no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie what steps will reproduce the problem gnt instance add o snf image default os parameters img passwd img format diskdump img id debian base t plain disk size no name check no ip check n dio test gnt grnet gr tsiou gnt instance shutdown timeout tsiou gnt instance reinstall o img passwd tsiou what is the expected output what do you see instead expected output root dio gnt instance info early instance name early uuid serial number creation time modification time state configured to be down actual state is down nodes primary dio test gnt grnet gr group default uuid secondaries operating system snf image default operating system parameters img format diskdump img id debian base img passwd however i see root dio gnt instance info early instance name early uuid serial number creation time modification time state configured to be down actual state is down nodes primary dio test gnt grnet gr group default uuid secondaries operating system snf image default operating system parameters img format diskdump img id debian base img passwd the password was changed successfully checked by logging in through a vnc console but the os params reported by gnt instance info remain unchanged please provide any additional information below even worse trying to reinstall an instance twice might result in unexpected behavior 
gnt instance add o snf image default os parameters img passwd img format diskdump img id debian base t plain disk size no name check no ip check n dio test gnt grnet gr gnt instance reinstall o img id ubuntu desktop gnt instance reinstall the second reinstall installs debian base back because that s what the config data knows about the img id os param note the os provider doesn t really make any difference i had snf image ready so that s what i used originally added on utc
0
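The failure mode in the Ganeti record above can be modeled in a few lines: reinstall computes effective OS parameters from the stored config merged with the `-O` overrides, but (per the report) never persists the overrides, so a later reinstall without `-O` falls back to the stale values. The dict layout below is an illustrative sketch, not Ganeti's actual config.data format:

```python
# Minimal stand-in for config.data holding one instance's OS parameters
config_data = {"tsiou": {"img_passwd": "passw0rd", "img_id": "debian_base-8.0-x86_64"}}

def reinstall(instance, overrides=None):
    # Effective params for this install: stored params merged with -O overrides
    effective = {**config_data[instance], **(overrides or {})}
    # BUG (as reported): the merged params are never written back, i.e. a
    # config_data[instance].update(overrides) step is missing here.
    return effective

first = reinstall("tsiou", {"img_passwd": "f00"})  # the install itself uses f00
second = reinstall("tsiou")                        # later reinstall without -O
```

The first call uses the new password, yet the stored config still reports the old one, so the second reinstall silently reverts — exactly the "second reinstall installs debian_base-8.0-x86_64 back" behavior in the report.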
296,173
25,534,439,367
IssuesEvent
2022-11-29 10:52:11
open-metadata/OpenMetadata
https://api.github.com/repos/open-metadata/OpenMetadata
closed
E2E Bigquery Testing
testing Ingestion
Bigquery CLI tests - [ ] Normal workflow test - [ ] Add different project_id than the one used to fetch tables ( creating another ticket to tackle this ) - [ ] add tags from different projects
1.0
E2E Bigquery Testing - Bigquery CLI tests - [ ] Normal workflow test - [ ] Add different project_id than the one used to fetch tables ( creating another ticket to tackle this ) - [ ] add tags from different projects
test
bigquery testing bigquery cli tests normal workflow test add different project id than the one used to fetch tables creating another ticket to tackle this add tags from different projects
1
60,810
8,467,883,912
IssuesEvent
2018-10-23 18:11:10
Synergex/HarmonyCore
https://api.github.com/repos/Synergex/HarmonyCore
closed
Create quick start guide for setting up a codegen workflow inside visual studio
Documentation
The sample projects use a handcrafted regen.bat and a custom menu item inside visual studio to support easily working with codegen. We need to flesh this process out and provide a simple way for users to get started using some of the templates included in this repository.
1.0
Create quick start guide for setting up a codegen workflow inside visual studio - The sample projects use a handcrafted regen.bat and a custom menu item inside visual studio to support easily working with codegen. We need to flesh this process out and provide a simple way for users to get started using some of the templates included in this repository.
non_test
create quick start guide for setting up a codegen workflow inside visual studio the sample projects use a handcrafted regen bat and a custom menu item inside visual studio to support easily working with codegen we need to flesh this process out and provide a simple way for users to get started using some of the templates included in this repository
0
171,925
13,253,380,059
IssuesEvent
2020-08-20 07:29:33
zeebe-io/zeebe
https://api.github.com/repos/zeebe-io/zeebe
reopened
BrokerTest.shouldStartAndStopBroker
Release: 0.24.2 Status: Needs Priority Type: Unstable Test
**Summary** - How often does the test fail? at least once so far - Does it block your work? no - Do we suspect that it is a real failure? unknown **Failures** <details><summary>Example assertion failure</summary> <pre> io.zeebe.util.exception.UncheckedExecutionException: Failed to start broker at io.zeebe.broker.Broker.internalStart(Broker.java:142) at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) at io.zeebe.broker.Broker.start(Broker.java:115) at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) at io.zeebe.broker.Broker.internalStart(Broker.java:135) ... 
36 more Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) at io.zeebe.gateway.Gateway.start(Gateway.java:130) at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use </pre> </details> **Hypotheses** **Logs** <details><summary>Logs</summary> <pre> [18:47:07.678 [] [main] INFO io.zeebe.test - Test started: shouldStartAndStopBroker(io.zeebe.broker.BrokerTest) 18:47:07.783 [] [main] INFO io.zeebe.test.util.SocketUtil - Starting socket assignment with testForkNumber 12 and testMavenId 1 18:47:07.857 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37700 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=1} 18:47:07.859 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37701 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=2} 18:47:07.860 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37702 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=3} 18:47:07.860 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37703 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=4} 18:47:07.861 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37704 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=5} 18:47:07.873 [] [main] DEBUG io.zeebe.broker.system - Initializing system with base path /tmp/041b82a2-bdf7-4ef7-b3ff-ec41b364a0d6 18:47:07.907 [] [main] INFO io.zeebe.broker.system - Version: 0.25.0-SNAPSHOT 18:47:08.474 [] [main] INFO 
io.zeebe.broker.system - Starting broker 0 with configuration { "network" : { "host" : "0.0.0.0", "portOffset" : 0, "maxMessageSize" : "4MB", "advertisedHost" : "0.0.0.0", "commandApi" : { "host" : "0.0.0.0", "port" : 37702, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37702, "address" : "0.0.0.0:37702", "advertisedAddress" : "0.0.0.0:37702" }, "internalApi" : { "host" : "0.0.0.0", "port" : 37703, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37703, "address" : "0.0.0.0:37703", "advertisedAddress" : "0.0.0.0:37703" }, "monitoringApi" : { "host" : "0.0.0.0", "port" : 37704, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37704, "address" : "0.0.0.0:37704", "advertisedAddress" : "0.0.0.0:37704" }, "maxMessageSizeInBytes" : 4194304 }, "cluster" : { "initialContactPoints" : [ ], "partitionIds" : [ 1 ], "nodeId" : 0, "partitionsCount" : 1, "replicationFactor" : 1, "clusterSize" : 1, "clusterName" : "zeebe-cluster", "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "cpuThreadCount" : 2, "ioThreadCount" : 2 }, "data" : { "directories" : [ "/tmp/041b82a2-bdf7-4ef7-b3ff-ec41b364a0d6/data" ], "logSegmentSize" : "512MB", "snapshotPeriod" : "PT15M", "logIndexDensity" : 100, "diskUsageReplicationWatermark" : 0.9, "diskUsageCommandWatermark" : 0.8, "diskUsageMonitoringInterval" : "PT1S", "freeDiskSpaceCommandWatermark" : 0, "freeDiskSpaceReplicationWatermark" : 0, "logSegmentSizeInBytes" : 536870912, "atomixStorageLevel" : "DISK" }, "exporters" : { "test-recorder" : { "jarPath" : null, "className" : "io.zeebe.test.util.record.RecordingExporter", "args" : null, "external" : false } }, "gateway" : { "network" : { "host" : "0.0.0.0", "port" : 37700, "minKeepAliveInterval" : "PT30S" }, "cluster" : { "contactPoint" : "0.0.0.0:37703", 
"requestTimeout" : "PT15S", "clusterName" : "zeebe-cluster", "memberId" : "gateway", "host" : "0.0.0.0", "port" : 37701, "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "managementThreads" : 1 }, "monitoring" : { "enabled" : false, "host" : "0.0.0.0", "port" : 9600 }, "security" : { "enabled" : false, "certificateChainPath" : null, "privateKeyPath" : null }, "longPolling" : { "enabled" : true }, "initialized" : true, "enable" : true }, "backpressure" : { "enabled" : true, "algorithm" : "VEGAS", "aimd" : { "requestTimeout" : "PT1S", "initialLimit" : 100, "minLimit" : 1, "maxLimit" : 1000, "backoffRatio" : 0.9 }, "fixedLimit" : { "limit" : 20 }, "vegas" : { "alpha" : 3, "beta" : 6, "initialLimit" : 20 }, "gradient" : { "minLimit" : 10, "initialLimit" : 20, "rttTolerance" : 2.0 }, "gradient2" : { "minLimit" : 10, "initialLimit" : 20, "rttTolerance" : 2.0, "longWindow" : 600 } }, "stepTimeout" : "PT5M", "executionMetricsExporterEnabled" : false } 18:47:08.580 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [1/13]: actor scheduler 18:47:08.647 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [1/13]: actor scheduler started in 2 ms 18:47:08.649 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [2/13]: membership and replication protocol 18:47:14.472 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [2/13]: membership and replication protocol started in 5822 ms 18:47:14.472 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [3/13]: command api transport 18:47:15.057 [] [main] DEBUG io.zeebe.broker.system - Bound command API to 0.0.0.0:37702 18:47:15.071 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [3/13]: command api transport started in 599 ms 18:47:15.072 [] [main] 
INFO io.zeebe.broker.system - Bootstrap Broker-0 [4/13]: command api handler 18:47:15.247 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [4/13]: command api handler started in 175 ms 18:47:15.247 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [5/13]: subscription api 18:47:15.349 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [5/13]: subscription api started in 101 ms 18:47:15.349 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/13]: embedded gateway 18:47:15.358 [] [main] INFO io.zeebe.gateway - Version: 0.25.0-SNAPSHOT 18:47:15.360 [] [main] INFO io.zeebe.gateway - Starting gateway with configuration { "network" : { "host" : "0.0.0.0", "port" : 37700, "minKeepAliveInterval" : "PT30S" }, "cluster" : { "contactPoint" : "0.0.0.0:37703", "requestTimeout" : "PT15S", "clusterName" : "zeebe-cluster", "memberId" : "gateway", "host" : "0.0.0.0", "port" : 37701, "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "managementThreads" : 1 }, "monitoring" : { "enabled" : false, "host" : "0.0.0.0", "port" : 9600 }, "security" : { "enabled" : false, "certificateChainPath" : null, "privateKeyPath" : null }, "longPolling" : { "enabled" : true }, "initialized" : true, "enable" : true } 18:47:15.849 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/13]: embedded gateway failed with unexpected exception. java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) ~[classes/:?] at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) ~[classes/:?] at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) ~[classes/:?] 
at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) ~[classes/:?] at io.zeebe.broker.Broker.internalStart(Broker.java:135) ~[classes/:?] at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) [zeebe-util-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.Broker.start(Broker.java:115) [classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) [test-classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) [test-classes/:?] at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) ~[grpc-netty-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) ~[grpc-core-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) ~[grpc-core-1.30.2.jar:1.30.2] at io.zeebe.gateway.Gateway.start(Gateway.java:130) ~[zeebe-gateway-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ~[classes/:?] ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use 18:47:15.859 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [1/5]: subscription api 18:47:15.862 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [1/5]: subscription api closed in 2 ms 18:47:15.862 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [2/5]: command api handler 18:47:15.864 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [2/5]: command api handler closed in 2 ms 18:47:15.864 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [3/5]: command api transport 18:47:17.952 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [3/5]: command api transport closed in 2088 ms 18:47:17.952 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [4/5]: membership and replication protocol 18:47:17.959 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [4/5]: membership and replication protocol closed in 7 ms 18:47:17.959 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [5/5]: actor scheduler 18:47:17.961 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [5/5]: actor scheduler closed in 2 ms 18:47:17.961 [] [main] INFO io.zeebe.broker.system - 
Closing Broker-0 succeeded. Closed 5 steps in 2102 ms. 18:47:17.961 [] [main] ERROR io.zeebe.broker.system - Failed to start broker 0! java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) ~[classes/:?] at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) ~[classes/:?] at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) ~[classes/:?] at io.zeebe.broker.Broker.internalStart(Broker.java:135) ~[classes/:?] at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) [zeebe-util-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.Broker.start(Broker.java:115) [classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) [test-classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) [test-classes/:?] 
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) ~[grpc-netty-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) ~[grpc-core-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) ~[grpc-core-1.30.2.jar:1.30.2] at io.zeebe.gateway.Gateway.start(Gateway.java:130) ~[zeebe-gateway-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ~[classes/:?] ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use </pre> </details>
1.0
BrokerTest.shouldStartAndStopBroker - **Summary** - How often does the test fail? at least once so far - Does it block your work? no - Do we suspect that it is a real failure? unknown **Failures** <details><summary>Example assertion failure</summary> <pre> io.zeebe.util.exception.UncheckedExecutionException: Failed to start broker at io.zeebe.broker.Broker.internalStart(Broker.java:142) at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) at io.zeebe.broker.Broker.start(Broker.java:115) at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) at io.zeebe.broker.Broker.internalStart(Broker.java:135) ... 
36 more Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) at io.zeebe.gateway.Gateway.start(Gateway.java:130) at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use </pre> </details> **Hypotheses** **Logs** <details><summary>Logs</summary> <pre> [18:47:07.678 [] [main] INFO io.zeebe.test - Test started: shouldStartAndStopBroker(io.zeebe.broker.BrokerTest) 18:47:07.783 [] [main] INFO io.zeebe.test.util.SocketUtil - Starting socket assignment with testForkNumber 12 and testMavenId 1 18:47:07.857 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37700 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=1} 18:47:07.859 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37701 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=2} 18:47:07.860 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37702 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=3} 18:47:07.860 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37703 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=4} 18:47:07.861 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 37704 for test fork 12 with range PortRange{host='localhost', basePort=37700, maxOffset=100, currentOffset=5} 18:47:07.873 [] [main] DEBUG io.zeebe.broker.system - Initializing system with base path /tmp/041b82a2-bdf7-4ef7-b3ff-ec41b364a0d6 18:47:07.907 [] [main] INFO io.zeebe.broker.system - Version: 0.25.0-SNAPSHOT 18:47:08.474 [] [main] INFO 
io.zeebe.broker.system - Starting broker 0 with configuration { "network" : { "host" : "0.0.0.0", "portOffset" : 0, "maxMessageSize" : "4MB", "advertisedHost" : "0.0.0.0", "commandApi" : { "host" : "0.0.0.0", "port" : 37702, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37702, "address" : "0.0.0.0:37702", "advertisedAddress" : "0.0.0.0:37702" }, "internalApi" : { "host" : "0.0.0.0", "port" : 37703, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37703, "address" : "0.0.0.0:37703", "advertisedAddress" : "0.0.0.0:37703" }, "monitoringApi" : { "host" : "0.0.0.0", "port" : 37704, "advertisedHost" : "0.0.0.0", "advertisedPort" : 37704, "address" : "0.0.0.0:37704", "advertisedAddress" : "0.0.0.0:37704" }, "maxMessageSizeInBytes" : 4194304 }, "cluster" : { "initialContactPoints" : [ ], "partitionIds" : [ 1 ], "nodeId" : 0, "partitionsCount" : 1, "replicationFactor" : 1, "clusterSize" : 1, "clusterName" : "zeebe-cluster", "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "cpuThreadCount" : 2, "ioThreadCount" : 2 }, "data" : { "directories" : [ "/tmp/041b82a2-bdf7-4ef7-b3ff-ec41b364a0d6/data" ], "logSegmentSize" : "512MB", "snapshotPeriod" : "PT15M", "logIndexDensity" : 100, "diskUsageReplicationWatermark" : 0.9, "diskUsageCommandWatermark" : 0.8, "diskUsageMonitoringInterval" : "PT1S", "freeDiskSpaceCommandWatermark" : 0, "freeDiskSpaceReplicationWatermark" : 0, "logSegmentSizeInBytes" : 536870912, "atomixStorageLevel" : "DISK" }, "exporters" : { "test-recorder" : { "jarPath" : null, "className" : "io.zeebe.test.util.record.RecordingExporter", "args" : null, "external" : false } }, "gateway" : { "network" : { "host" : "0.0.0.0", "port" : 37700, "minKeepAliveInterval" : "PT30S" }, "cluster" : { "contactPoint" : "0.0.0.0:37703", 
"requestTimeout" : "PT15S", "clusterName" : "zeebe-cluster", "memberId" : "gateway", "host" : "0.0.0.0", "port" : 37701, "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "managementThreads" : 1 }, "monitoring" : { "enabled" : false, "host" : "0.0.0.0", "port" : 9600 }, "security" : { "enabled" : false, "certificateChainPath" : null, "privateKeyPath" : null }, "longPolling" : { "enabled" : true }, "initialized" : true, "enable" : true }, "backpressure" : { "enabled" : true, "algorithm" : "VEGAS", "aimd" : { "requestTimeout" : "PT1S", "initialLimit" : 100, "minLimit" : 1, "maxLimit" : 1000, "backoffRatio" : 0.9 }, "fixedLimit" : { "limit" : 20 }, "vegas" : { "alpha" : 3, "beta" : 6, "initialLimit" : 20 }, "gradient" : { "minLimit" : 10, "initialLimit" : 20, "rttTolerance" : 2.0 }, "gradient2" : { "minLimit" : 10, "initialLimit" : 20, "rttTolerance" : 2.0, "longWindow" : 600 } }, "stepTimeout" : "PT5M", "executionMetricsExporterEnabled" : false } 18:47:08.580 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [1/13]: actor scheduler 18:47:08.647 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [1/13]: actor scheduler started in 2 ms 18:47:08.649 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [2/13]: membership and replication protocol 18:47:14.472 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [2/13]: membership and replication protocol started in 5822 ms 18:47:14.472 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [3/13]: command api transport 18:47:15.057 [] [main] DEBUG io.zeebe.broker.system - Bound command API to 0.0.0.0:37702 18:47:15.071 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [3/13]: command api transport started in 599 ms 18:47:15.072 [] [main] 
INFO io.zeebe.broker.system - Bootstrap Broker-0 [4/13]: command api handler 18:47:15.247 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [4/13]: command api handler started in 175 ms 18:47:15.247 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [5/13]: subscription api 18:47:15.349 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [5/13]: subscription api started in 101 ms 18:47:15.349 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/13]: embedded gateway 18:47:15.358 [] [main] INFO io.zeebe.gateway - Version: 0.25.0-SNAPSHOT 18:47:15.360 [] [main] INFO io.zeebe.gateway - Starting gateway with configuration { "network" : { "host" : "0.0.0.0", "port" : 37700, "minKeepAliveInterval" : "PT30S" }, "cluster" : { "contactPoint" : "0.0.0.0:37703", "requestTimeout" : "PT15S", "clusterName" : "zeebe-cluster", "memberId" : "gateway", "host" : "0.0.0.0", "port" : 37701, "membership" : { "broadcastUpdates" : false, "broadcastDisputes" : true, "notifySuspect" : false, "gossipInterval" : "PT0.25S", "gossipFanout" : 2, "probeInterval" : "PT1S", "probeTimeout" : "PT2S", "suspectProbes" : 3, "failureTimeout" : "PT10S", "syncInterval" : "PT10S" } }, "threads" : { "managementThreads" : 1 }, "monitoring" : { "enabled" : false, "host" : "0.0.0.0", "port" : 9600 }, "security" : { "enabled" : false, "certificateChainPath" : null, "privateKeyPath" : null }, "longPolling" : { "enabled" : true }, "initialized" : true, "enable" : true } 18:47:15.849 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/13]: embedded gateway failed with unexpected exception. java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) ~[classes/:?] at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) ~[classes/:?] at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) ~[classes/:?] 
at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) ~[classes/:?] at io.zeebe.broker.Broker.internalStart(Broker.java:135) ~[classes/:?] at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) [zeebe-util-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.Broker.start(Broker.java:115) [classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) [test-classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) [test-classes/:?] at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) ~[grpc-netty-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) ~[grpc-core-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) ~[grpc-core-1.30.2.jar:1.30.2] at io.zeebe.gateway.Gateway.start(Gateway.java:130) ~[zeebe-gateway-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ~[classes/:?] ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use 18:47:15.859 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [1/5]: subscription api 18:47:15.862 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [1/5]: subscription api closed in 2 ms 18:47:15.862 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [2/5]: command api handler 18:47:15.864 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [2/5]: command api handler closed in 2 ms 18:47:15.864 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [3/5]: command api transport 18:47:17.952 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [3/5]: command api transport closed in 2088 ms 18:47:17.952 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [4/5]: membership and replication protocol 18:47:17.959 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [4/5]: membership and replication protocol closed in 7 ms 18:47:17.959 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [5/5]: actor scheduler 18:47:17.961 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [5/5]: actor scheduler closed in 2 ms 18:47:17.961 [] [main] INFO io.zeebe.broker.system - 
Closing Broker-0 succeeded. Closed 5 steps in 2102 ms. 18:47:17.961 [] [main] ERROR io.zeebe.broker.system - Failed to start broker 0! java.io.UncheckedIOException: Gateway was not able to start at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:47) ~[classes/:?] at io.zeebe.broker.system.EmbeddedGatewayService.<init>(EmbeddedGatewayService.java:29) ~[classes/:?] at io.zeebe.broker.Broker.lambda$initStart$4(Broker.java:173) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88) ~[classes/:?] at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43) ~[classes/:?] at io.zeebe.broker.Broker.internalStart(Broker.java:135) ~[classes/:?] at io.zeebe.util.LogUtil.doWithMDC(LogUtil.java:21) [zeebe-util-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.Broker.start(Broker.java:115) [classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.startBroker(EmbeddedBrokerRule.java:226) [test-classes/:?] at io.zeebe.broker.test.EmbeddedBrokerRule.before(EmbeddedBrokerRule.java:130) [test-classes/:?] 
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) [junit-4.13.jar:4.13] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:128) [junit-4.13.jar:4.13] at org.junit.runners.Suite.runChild(Suite.java:27) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) [junit-4.13.jar:4.13] at org.junit.runners.ParentRunner.run(ParentRunner.java:413) [junit-4.13.jar:4.13] at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:157) [surefire-junit47-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) [surefire-booter-3.0.0-M5.jar:3.0.0-M5] Caused by: java.io.IOException: Failed to bind at io.grpc.netty.NettyServer.start(NettyServer.java:264) ~[grpc-netty-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:183) ~[grpc-core-1.30.2.jar:1.30.2] at io.grpc.internal.ServerImpl.start(ServerImpl.java:90) ~[grpc-core-1.30.2.jar:1.30.2] at io.zeebe.gateway.Gateway.start(Gateway.java:130) ~[zeebe-gateway-0.25.0-SNAPSHOT.jar:0.25.0-SNAPSHOT] at io.zeebe.broker.system.EmbeddedGatewayService.startGateway(EmbeddedGatewayService.java:45) ~[classes/:?] ... 44 more Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use </pre> </details>
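The root cause shown in both stack traces is an ordinary TCP port collision: the embedded gateway tries to bind a port that another process (or a socket leaked by an earlier test fork) still holds, and gRPC/Netty surfaces this as `bind(..) failed: Address already in use`. A minimal, self-contained sketch of that failure mode, assuming nothing about Zeebe itself (the class name `PortConflictDemo` is made up for illustration):

```java
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortConflictDemo {

    // Returns true if a second bind to an already-held port fails,
    // mirroring the "bind(..) failed: Address already in use" above.
    static boolean bindTwiceFails() throws IOException {
        // Port 0 asks the OS for any free port, so the first bind always succeeds.
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            // While `first` is still open, a second bind to the same port
            // fails exactly like the gateway start in the logs.
            try (ServerSocket second = new ServerSocket(port)) {
                return false; // unexpected: the port was double-bound
            } catch (BindException expected) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("second bind failed: " + bindTwiceFails());
    }
}
```

Binding to port 0 and reading `getLocalPort()` is one common way test fixtures sidestep this class of flake, since the OS hands out a port that is guaranteed free at bind time; a fixed per-fork port range like the `SocketUtil` assignment in the logs can still collide with ports held by unrelated processes.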
test
java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by java io ioexception failed to bind at io grpc netty nettyserver start nettyserver java at io grpc internal serverimpl start serverimpl java at io grpc internal serverimpl start serverimpl java at io zeebe gateway gateway start gateway java at io zeebe broker system embeddedgatewayservice startgateway embeddedgatewayservice java more caused by io netty channel unix errors nativeioexception bind failed address already in use info io zeebe broker system closing broker subscription api debug io zeebe broker system closing broker subscription api closed in ms info io zeebe broker system closing broker command api handler debug io zeebe broker system closing broker command api handler closed in ms info io zeebe broker system closing broker command api transport debug io zeebe broker system closing 
broker command api transport closed in ms info io zeebe broker system closing broker membership and replication protocol debug io zeebe broker system closing broker membership and replication protocol closed in ms info io zeebe broker system closing broker actor scheduler debug io zeebe broker system closing broker actor scheduler closed in ms info io zeebe broker system closing broker succeeded closed steps in ms error io zeebe broker system failed to start broker java io uncheckedioexception gateway was not able to start at io zeebe broker system embeddedgatewayservice startgateway embeddedgatewayservice java at io zeebe broker system embeddedgatewayservice embeddedgatewayservice java at io zeebe broker broker lambda initstart broker java at io zeebe broker bootstrap startprocess lambda startstepbystep startprocess java at io zeebe broker bootstrap startprocess takeduration startprocess java at io zeebe broker bootstrap startprocess startstepbystep startprocess java at io zeebe broker bootstrap startprocess takeduration startprocess java at io zeebe broker bootstrap startprocess start startprocess java at io zeebe broker broker internalstart broker java at io zeebe util logutil dowithmdc logutil java at io zeebe broker broker start broker java at io zeebe broker test embeddedbrokerrule startbroker embeddedbrokerrule java at io zeebe broker test embeddedbrokerrule before embeddedbrokerrule java at org junit rules externalresource evaluate externalresource java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate 
parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by java io ioexception failed to bind at io grpc netty nettyserver start nettyserver java at io grpc internal serverimpl start serverimpl java at io grpc internal serverimpl start serverimpl java at io zeebe gateway gateway start gateway java at io zeebe broker system embeddedgatewayservice startgateway embeddedgatewayservice java more caused by io netty channel unix errors nativeioexception bind failed address already in use
1
6,984
2,869,970,449
IssuesEvent
2015-06-06 18:07:04
rust-lang/cargo
https://api.github.com/repos/rust-lang/cargo
closed
dep/dev-dep cycle causes doc tests to fail
C-test
``` . ├── Cargo.toml ├── src │   └── lib.rs └── sub1 ├── Cargo.toml └── src └── lib.rs 3 directories, 4 files ``` ## Cargo.toml ```toml [package] name = "recursive" version = "0.1.0" authors = ["Huon Wilson <dbau.pp@gmail.com>"] [dev-dependencies] sub1 = { path = "sub1" } ``` ## src/lib.rs ```rust //! ```rust //! extern crate sub1; //! ``` #[cfg(test)]extern crate sub1; ``` ## sub1/Cargo.toml ```toml [package] name = "sub1" version = "0.1.0" authors = ["Huon Wilson <dbau.pp@gmail.com>"] [dependencies] "recursive" = { path = ".." } ``` ## sub1/src/lib.rs ```rust extern crate recursive; ``` --- Running `cargo test` inside `sub1` gives: ``` Compiling recursive v0.1.0 (file:///home/huon/projects/test-rust/recursive) Compiling sub1 v0.1.0 (file:///home/huon/projects/test-rust/recursive) Running target/debug/recursive-bdc6bf679eadc42f running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured Doc-tests recursive running 1 test test _0 ... FAILED failures: ---- _0 stdout ---- <anon>:2:5: 2:23 error: can't find crate for `recursive` which `sub1` depends on <anon>:2 extern crate sub1; ^~~~~~~~~~~~~~~~~~ error: aborting due to previous error thread '_0' panicked at 'Box<Any>', /home/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libsyntax/diagnostic.rs:211 failures: _0 test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured ``` That is, only the doc test is failing.
1.0
dep/dev-dep cycle causes doc tests to fail - ``` . ├── Cargo.toml ├── src │   └── lib.rs └── sub1 ├── Cargo.toml └── src └── lib.rs 3 directories, 4 files ``` ## Cargo.toml ```toml [package] name = "recursive" version = "0.1.0" authors = ["Huon Wilson <dbau.pp@gmail.com>"] [dev-dependencies] sub1 = { path = "sub1" } ``` ## src/lib.rs ```rust //! ```rust //! extern crate sub1; //! ``` #[cfg(test)]extern crate sub1; ``` ## sub1/Cargo.toml ```toml [package] name = "sub1" version = "0.1.0" authors = ["Huon Wilson <dbau.pp@gmail.com>"] [dependencies] "recursive" = { path = ".." } ``` ## sub1/src/lib.rs ```rust extern crate recursive; ``` --- Running `cargo test` inside `sub1` gives: ``` Compiling recursive v0.1.0 (file:///home/huon/projects/test-rust/recursive) Compiling sub1 v0.1.0 (file:///home/huon/projects/test-rust/recursive) Running target/debug/recursive-bdc6bf679eadc42f running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured Doc-tests recursive running 1 test test _0 ... FAILED failures: ---- _0 stdout ---- <anon>:2:5: 2:23 error: can't find crate for `recursive` which `sub1` depends on <anon>:2 extern crate sub1; ^~~~~~~~~~~~~~~~~~ error: aborting due to previous error thread '_0' panicked at 'Box<Any>', /home/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libsyntax/diagnostic.rs:211 failures: _0 test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured ``` That is, only the doc test is failing.
test
dep dev dep cycle causes doc tests to fail ├── cargo toml ├── src │   └── lib rs └── ├── cargo toml └── src └── lib rs directories files cargo toml toml name recursive version authors path src lib rs rust rust extern crate extern crate cargo toml toml name version authors recursive path src lib rs rust extern crate recursive running cargo test inside gives compiling recursive file home huon projects test rust recursive compiling file home huon projects test rust recursive running target debug recursive running tests test result ok passed failed ignored measured doc tests recursive running test test failed failures stdout error can t find crate for recursive which depends on extern crate error aborting due to previous error thread panicked at box home rustbuild src rust buildbot slave nightly dist rustc linux build src libsyntax diagnostic rs failures test result failed passed failed ignored measured that is only the doc test is failing
1
43,438
5,538,535,661
IssuesEvent
2017-03-22 02:07:24
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
[DEV] All duplicate cases in bundles(rhels7.2_ppc64.bundle/rhels7.2_ppc64le.bundle/rhels7.3_ppc64le.bundle)
component:test
There are duplicate cases in rhels7.2_ppc64.bundle/rhels7.2_ppc64le.bundle/rhels7.3_ppc64le.bundle [root@ bundle]# for i in `ls`;do echo $i; sort $i| uniq -c | grep -v ' 1 '; done .... rhels7.2_ppc64.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange rhels7.2_ppc64le.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange ...... rhels7.3_ppc64le.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange ......
1.0
[DEV] All duplicate cases in bundles(rhels7.2_ppc64.bundle/rhels7.2_ppc64le.bundle/rhels7.3_ppc64le.bundle) - There are duplicate cases in rhels7.2_ppc64.bundle/rhels7.2_ppc64le.bundle/rhels7.3_ppc64le.bundle [root@ bundle]# for i in `ls`;do echo $i; sort $i| uniq -c | grep -v ' 1 '; done .... rhels7.2_ppc64.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange rhels7.2_ppc64le.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange ...... rhels7.3_ppc64le.bundle 2 makehosts_h 2 makehosts_help 2 makehosts_n 2 makehosts_n_noderange ......
test
all duplicate cases in bundles bundle bundle bundle there are duplicate cases in bundle bundle bundle for i in ls do echo i sort i uniq c grep v done bundle makehosts h makehosts help makehosts n makehosts n noderange bundle makehosts h makehosts help makehosts n makehosts n noderange bundle makehosts h makehosts help makehosts n makehosts n noderange
1
138,997
11,224,082,849
IssuesEvent
2020-01-08 00:56:34
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Air-gapped cluster provisioning fails when private registry requires authentication
[zube]: To Test area/cluster area/machine area/node-template internal kind/bug team/ca
<!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** 1. Install Rancher HA following the [HA air gapped installation docs](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/). 2. In [step 2](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/), publish the images to a private registry (say, `private.registry.com`) that requires authentication to push/pull images. 3. Note how in [Step 5](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg) one configures Rancher to `use the private registry in order to provision any Rancher launched Kubernetes clusters`. 4. Once Rancher is installed, create a vSphere node template 5. Create a cluster using the vSphere node template, and configuring the private registry from step 2 as "default system registry" while providing credentials allowing to pull the images. **Result:** Cluster creation fails. After nodes were provisioned the following error was logged: ``` Unable to find image 'private.registry.com/rancher/rancher-agent:v2.2.2' locally docker: Error response from daemon: Get https://private.registry.com/v2/: unauthorized: authentication required ``` **Other details that may be helpful:** While Rancher uses the credentials configured for the cluster private registry when pulling the RKE system images, it does not use the credentials when pulling the initial Rancher Agent image via the docker-machine ssh connection. 
**Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): v2.2.2 - Installation option (single install/HA): HA <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): - Machine type (cloud/VM/metal) and specifications (CPU/memory): - Kubernetes version (use `kubectl version`): ``` (paste the output here) ``` - Docker version (use `docker version`): ``` (paste the output here) ```
1.0
Air-gapped cluster provisioning fails when private registry requires authentication - <!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** 1. Install Rancher HA following the [HA air gapped installation docs](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/). 2. In [step 2](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/), publish the images to a private registry (say, `private.registry.com`) that requires authentication to push/pull images. 3. Note how in [Step 5](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg) one configures Rancher to `use the private registry in order to provision any Rancher launched Kubernetes clusters`. 4. Once Rancher is installed, create a vSphere node template 5. Create a cluster using the vSphere node template, and configuring the private registry from step 2 as "default system registry" while providing credentials allowing to pull the images. **Result:** Cluster creation fails. 
After nodes were provisioned the following error was logged: ``` Unable to find image 'private.registry.com/rancher/rancher-agent:v2.2.2' locally docker: Error response from daemon: Get https://private.registry.com/v2/: unauthorized: authentication required ``` **Other details that may be helpful:** While Rancher uses the credentials configured for the cluster private registry when pulling the RKE system images, it does not use the credentials when pulling the initial Rancher Agent image via the docker-machine ssh connection. **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): v2.2.2 - Installation option (single install/HA): HA <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): - Machine type (cloud/VM/metal) and specifications (CPU/memory): - Kubernetes version (use `kubectl version`): ``` (paste the output here) ``` - Docker version (use `docker version`): ``` (paste the output here) ```
test
air gapped cluster provisioning fails when private registry requires authentication please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible install rancher ha following the in publish the images to a private registry say private registry com that requires authentication to push pull images note how in one configures rancher to use the private registry in order to provision any rancher launched kubernetes clusters once rancher is installed create a vsphere node template create a cluster using the vsphere node template and configuring the private registry from step as default system registry while providing credentials allowing to pull the images result cluster creation fails after nodes were provisioned the following error was logged unable to find image private registry com rancher rancher agent locally docker error response from daemon get unauthorized authentication required other details that may be helpful while rancher uses the credentials configured for the cluster private registry when pulling the rke system images it does not use the credentials when pulling the initial rancher agent image via the docker machine ssh connection environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui installation option single install ha ha if the reported issue is regarding a created cluster please provide requested info below cluster information cluster type hosted infrastructure provider custom imported machine type cloud vm metal and specifications cpu memory kubernetes version use kubectl version paste the output here docker version use docker version paste the output here
1
346,253
30,879,584,976
IssuesEvent
2023-08-03 16:31:39
confidential-containers/cloud-api-adaptor
https://api.github.com/repos/confidential-containers/cloud-api-adaptor
closed
Azure Provisioner does not have support to provide custom CAA container image support
CI provider/azure e2e-test
Right now the provisioner deploys the default image from the kustomization file. We need a way to provide a custom container image that is built to run the e2e test.
1.0
Azure Provisioner does not have support to provide custom CAA container image support - Right now the provisioner deploys the default image from the kustomization file. We need a way to provide a custom container image that is built to run the e2e test.
test
azure provisioner does not have support to provide custom caa container image support right now the provisioner deploys the default image from the kustomization file we need a way to provide a custom container image that is built to run the test
1
282,181
30,889,210,165
IssuesEvent
2023-08-04 02:23:57
madhans23/linux-4.1.15
https://api.github.com/repos/madhans23/linux-4.1.15
reopened
CVE-2021-45868 (Medium) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2021-45868 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/quota/quota_tree.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/quota/quota_tree.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel before 5.15.3, fs/quota/quota_tree.c does not validate the block number in the quota tree (on disk). This can, for example, lead to a kernel/locking/rwsem.c use-after-free if there is a corrupted quota file. 
<p>Publish Date: 2022-03-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-45868>CVE-2021-45868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-45868">https://www.linuxkernelcves.com/cves/CVE-2021-45868</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: v4.4.293,v4.9.291,v4.14.256,v4.19.218,v5.4.160,v5.10.80,v5.14.19,v5.15.3,v5.16-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-45868 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2021-45868 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/quota/quota_tree.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/quota/quota_tree.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel before 5.15.3, fs/quota/quota_tree.c does not validate the block number in the quota tree (on disk). This can, for example, lead to a kernel/locking/rwsem.c use-after-free if there is a corrupted quota file. 
<p>Publish Date: 2022-03-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-45868>CVE-2021-45868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-45868">https://www.linuxkernelcves.com/cves/CVE-2021-45868</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: v4.4.293,v4.9.291,v4.14.256,v4.19.218,v5.4.160,v5.10.80,v5.14.19,v5.15.3,v5.16-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files fs quota quota tree c fs quota quota tree c vulnerability details in the linux kernel before fs quota quota tree c does not validate the block number in the quota tree on disk this can for example lead to a kernel locking rwsem c use after free if there is a corrupted quota file publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
276,917
24,031,709,075
IssuesEvent
2022-09-15 15:30:24
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
[APM] Column width is inconsistent in the traces table in backend operation view
Team:APM apm:test-plan-regression 8.5 candidate
Column width is inconsistent in the traces table in backend operation view. https://user-images.githubusercontent.com/5831975/182563009-98b79bf4-f1b2-4fc9-b724-1d77f3e356d2.mov
1.0
[APM] Column width is inconsistent in the traces table in backend operation view - Column width is inconsistent in the traces table in backend operation view. https://user-images.githubusercontent.com/5831975/182563009-98b79bf4-f1b2-4fc9-b724-1d77f3e356d2.mov
test
column width is inconsistent in the traces table in backend operation view column width is inconsistent in the traces table in backend operation view
1
206,978
15,785,792,283
IssuesEvent
2021-04-01 16:49:22
iterative/dvc
https://api.github.com/repos/iterative/dvc
closed
tests: conflicting test requirements/deps
enhancement p3-nice-to-have testing ui
**Please provide information about your setup** ``` $ dvc --version 0.86.5+ffa8fe ``` I'm running Ubuntu 19.10 and installed via pip from source. I tried following the instructions in the contribution guidelines but got a fatal error (issue opened [here](https://github.com/iterative/dvc.org/issues/1016)). After working around that, I managed to build, but got two non-fatal errors: ``` ERROR: python-dev-tools 2020.2.5 has requirement pydocstyle==5.0.2, but you'll have pydocstyle 3.0.0 which is incompatible. ERROR: moto 1.3.14.dev464 has requirement idna<2.9,>=2.5, but you'll have idna 2.9 which is incompatible. ```
1.0
tests: conflicting test requirements/deps - **Please provide information about your setup** ``` $ dvc --version 0.86.5+ffa8fe ``` I'm running Ubuntu 19.10 and installed via pip from source. I tried following the instructions in the contribution guidelines but got a fatal error (issue opened [here](https://github.com/iterative/dvc.org/issues/1016)). After working around that, I managed to build, but got two non-fatal errors: ``` ERROR: python-dev-tools 2020.2.5 has requirement pydocstyle==5.0.2, but you'll have pydocstyle 3.0.0 which is incompatible. ERROR: moto 1.3.14.dev464 has requirement idna<2.9,>=2.5, but you'll have idna 2.9 which is incompatible. ```
test
tests conflicting test requirements deps please provide information about your setup dvc version i m running ubuntu and installed via pip from source i tried following the instructions in the contribution guidelines but got a fatal error issue opened after working around that i managed to build but got two non fatal errors error python dev tools has requirement pydocstyle but you ll have pydocstyle which is incompatible error moto has requirement idna but you ll have idna which is incompatible
1
16,876
23,232,540,105
IssuesEvent
2022-08-03 08:54:37
pingcap/tidb
https://api.github.com/repos/pingcap/tidb
closed
The default value of timestamp inconsistent with MySQL
compatibility-mysql8
## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) ``` create table t(a timestamp default '2010-09-12 12:23:00'); select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; ``` ### 2. What did you expect to see? (Required) ``` mysql> select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; +---------------------+-------------+ | COLUMN_DEFAULT | COLUMN_TYPE | +---------------------+-------------+ | 2010-09-12 12:23:00 | timestamp | +---------------------+-------------+ 1 row in set (0.00 sec) ``` ### 3. What did you see instead (Required) ``` mysql> select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; +---------------------+-------------+ | COLUMN_DEFAULT | COLUMN_TYPE | +---------------------+-------------+ | 2010-09-12 04:23:00 | timestamp | +---------------------+-------------+ 1 row in set (0.00 sec) ``` ### 4. What is your TiDB version? 
(Required) ``` mysql> select tidb_version(); +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | tidb_version() | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Release Version: v6.2.0-alpha-73-ga523d767f Edition: Community Git Commit Hash: a523d767f88e3fbcff1c3f138c8461dc0af5cd5e Git Branch: master UTC Build Time: 2022-06-07 09:24:17 GoVersion: go1.18 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false Store: unistore | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.00 sec) ```
True
The default value of timestamp inconsistent with MySQL - ## Bug Report Please answer these questions before submitting your issue. Thanks! ### 1. Minimal reproduce step (Required) ``` create table t(a timestamp default '2010-09-12 12:23:00'); select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; ``` ### 2. What did you expect to see? (Required) ``` mysql> select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; +---------------------+-------------+ | COLUMN_DEFAULT | COLUMN_TYPE | +---------------------+-------------+ | 2010-09-12 12:23:00 | timestamp | +---------------------+-------------+ 1 row in set (0.00 sec) ``` ### 3. What did you see instead (Required) ``` mysql> select COLUMN_DEFAULT, COLUMN_TYPE from information_schema.columns where table_name='t' and column_name='a'; +---------------------+-------------+ | COLUMN_DEFAULT | COLUMN_TYPE | +---------------------+-------------+ | 2010-09-12 04:23:00 | timestamp | +---------------------+-------------+ 1 row in set (0.00 sec) ``` ### 4. What is your TiDB version? 
(Required) ``` mysql> select tidb_version(); +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | tidb_version() | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Release Version: v6.2.0-alpha-73-ga523d767f Edition: Community Git Commit Hash: a523d767f88e3fbcff1c3f138c8461dc0af5cd5e Git Branch: master UTC Build Time: 2022-06-07 09:24:17 GoVersion: go1.18 Race Enabled: false TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306 Check Table Before Drop: false Store: unistore | +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.00 sec) ```
non_test
the default value of timestamp inconsistent with mysql bug report please answer these questions before submitting your issue thanks minimal reproduce step required create table t a timestamp default select column default column type from information schema columns where table name t and column name a what did you expect to see required mysql select column default column type from information schema columns where table name t and column name a column default column type timestamp row in set sec what did you see instead required mysql select column default column type from information schema columns where table name t and column name a column default column type timestamp row in set sec what is your tidb version required mysql select tidb version tidb version release version alpha edition community git commit hash git branch master utc build time goversion race enabled false tikv min version check table before drop false store unistore row in set sec
0
288,813
24,938,310,986
IssuesEvent
2022-10-31 16:46:55
bethlakshmi/gbe-divio-djangocms-python2.7
https://api.github.com/repos/bethlakshmi/gbe-divio-djangocms-python2.7
closed
subway map for bidding
low PR-ready On Test Server
Luna's gonna give me a design. I (@bethlakshmi) am gonna look at good implementation patterns.
1.0
subway map for bidding - Luna's gonna give me a design. I (@bethlakshmi) am gonna look at good implementation patterns.
test
subway map for bidding luna s gonna give me a design i bethlakshmi am gonna look at good implementation patterns
1
325,777
27,962,047,921
IssuesEvent
2023-03-24 16:21:43
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix jax_numpy_math.test_jax_numpy_inner
JAX Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix jax_numpy_math.test_jax_numpy_inner - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4505811912/jobs/7931947619" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
test
fix jax numpy math test jax numpy inner tensorflow img src torch img src numpy img src jax img src
1
349,304
31,791,723,948
IssuesEvent
2023-09-13 04:17:22
python/cpython
https://api.github.com/repos/python/cpython
closed
Python test suite is unable to re-run some tests (NO TESTS RAN)
type-bug tests
# Bug report ### Checklist - [X] I am confident this is a bug in CPython, not a bug in a third-party project - [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc), and am confident this bug has not been reported before ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux, Windows ### Output from running 'python -VV' on the command line: _No response_ ### A clear and concise description of the bug: When some Python tests fail and are re-run in verbose mode, no tests are run. Example 1: GHA Windows x86: https://github.com/python/cpython/actions/runs/5956556239/job/16157568818?pr=108370 ``` 0:33:01 load avg: 0.01 [447/447/2] test_concurrent_futures crashed (Exit code 1) Timeout (0:20:00)! (...) 0:33:02 Re-running test_concurrent_futures in verbose mode ---------------------------------------------------------------------- Ran 0 tests in 0.001s NO TESTS RAN == Tests result: FAILURE then SUCCESS == ``` Example 2: PPC64 Fedora PR: https://buildbot.python.org/all/#/builders/256/builds/1206 ``` 0:38:47 load avg: 2.92 [186/447/2] test_math crashed (Exit code 1) -- running: test_io (1 min 2 sec) Timeout (0:15:00)! (...) 1:13:23 load avg: 1.21 Re-running test_math in verbose mode Ran 0 tests in 0.000s NO TESTS RAN (...) 1 test failed: test_cppext 2 re-run tests: test_cppext test_math 1 test run no tests: test_math ```
1.0
Python test suite is unable to re-run some tests (NO TESTS RAN) - # Bug report ### Checklist - [X] I am confident this is a bug in CPython, not a bug in a third-party project - [X] I have searched the [CPython issue tracker](https://github.com/python/cpython/issues?q=is%3Aissue+sort%3Acreated-desc), and am confident this bug has not been reported before ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux, Windows ### Output from running 'python -VV' on the command line: _No response_ ### A clear and concise description of the bug: When some Python tests fail and are re-run in verbose mode, no tests are run. Example 1: GHA Windows x86: https://github.com/python/cpython/actions/runs/5956556239/job/16157568818?pr=108370 ``` 0:33:01 load avg: 0.01 [447/447/2] test_concurrent_futures crashed (Exit code 1) Timeout (0:20:00)! (...) 0:33:02 Re-running test_concurrent_futures in verbose mode ---------------------------------------------------------------------- Ran 0 tests in 0.001s NO TESTS RAN == Tests result: FAILURE then SUCCESS == ``` Example 2: PPC64 Fedora PR: https://buildbot.python.org/all/#/builders/256/builds/1206 ``` 0:38:47 load avg: 2.92 [186/447/2] test_math crashed (Exit code 1) -- running: test_io (1 min 2 sec) Timeout (0:15:00)! (...) 1:13:23 load avg: 1.21 Re-running test_math in verbose mode Ran 0 tests in 0.000s NO TESTS RAN (...) 1 test failed: test_cppext 2 re-run tests: test_cppext test_math 1 test run no tests: test_math ```
test
python test suite is unable to re run some tests no tests ran bug report checklist i am confident this is a bug in cpython not a bug in a third party project i have searched the and am confident this bug has not been reported before cpython versions tested on cpython main branch operating systems tested on linux windows output from running python vv on the command line no response a clear and concise description of the bug when some python tests fail and are re run in verbose mode no tests are run example gha windows load avg test concurrent futures crashed exit code timeout re running test concurrent futures in verbose mode ran tests in no tests ran tests result failure then success example fedora pr load avg test math crashed exit code running test io min sec timeout load avg re running test math in verbose mode ran tests in no tests ran test failed test cppext re run tests test cppext test math test run no tests test math
1
2,112
3,516,672,675
IssuesEvent
2016-01-12 01:10:50
cga-harvard/cga-worldmap
https://api.github.com/repos/cga-harvard/cga-worldmap
closed
Move tomcats from /opt/ to /mnt/sdp/opt
infrastructure
To allow bigger downloads, because it uses a temp file for WFS requests.
1.0
Move tomcats from /opt/ to /mnt/sdp/opt - To allow bigger downloads, because it uses a temp file for WFS requests.
non_test
move tomcats from opt to mnt sdp opt to allow bigger downloads because it uses a temp file for wfs requests
0
105,649
13,203,947,237
IssuesEvent
2020-08-14 15:02:03
tektoncd/pipeline
https://api.github.com/repos/tektoncd/pipeline
reopened
feature request: running pipelines locally
design help wanted lifecycle/rotten
Hello from [kubernetes/test-infra](https://github.com/kubernetes/test-infra)! :wave: -- We think the pipeline is really interesting and we'd like to use it more, and we (I) have a feature request which we get a lot with our current "jobs" system: > how do I test this [job definition] on my machine and > how do I run this test job [to test code for some repo using the jobs for ci] We've [considered solving this for "ProwJobs"](https://github.com/kubernetes/test-infra/issues/6590) and while looking at prior art I found https://github.com/GoogleCloudPlatform/cloud-build-local, which has an implementation of this for GCB which looks a bit similar ... I'd like to start a discussion around how we could potentially accomplish this for the build pipeline. For the first case just being able to run it locally in some form is sufficient, for the second case users would likely want to be able to mount not-yet-pushed code to the job and build / test that. If you all are interested, I'd like to help with this. I think it could be a "killer" feature to be able to iterate on running and testing your containerized pipelines locally before submitting them to whatever CI you will use in production. I know [Prow](https://github.com/kubernetes/test-infra/tree/master/prow) users would love this. cc @bobcatfish
1.0
feature request: running pipelines locally - Hello from [kubernetes/test-infra](https://github.com/kubernetes/test-infra)! :wave: -- We think the pipeline is really interesting and we'd like to use it more, and we (I) have a feature request which we get a lot with our current "jobs" system: > how do I test this [job definition] on my machine and > how do I run this test job [to test code for some repo using the jobs for ci] We've [considered solving this for "ProwJobs"](https://github.com/kubernetes/test-infra/issues/6590) and while looking at prior art I found https://github.com/GoogleCloudPlatform/cloud-build-local, which has an implementation of this for GCB which looks a bit similar ... I'd like to start a discussion around how we could potentially accomplish this for the build pipeline. For the first case just being able to run it locally in some form is sufficient, for the second case users would likely want to be able to mount not-yet-pushed code to the job and build / test that. If you all are interested, I'd like to help with this. I think it could be a "killer" feature to be able to iterate on running and testing your containerized pipelines locally before submitting them to whatever CI you will use in production. I know [Prow](https://github.com/kubernetes/test-infra/tree/master/prow) users would love this. cc @bobcatfish
non_test
feature request running pipelines locally hello from wave we think the pipeline is really interesting and we d like to use it more and we i have a feature request which we get a lot with our current jobs system how do i test this on my machine and how do i run this test job we ve and while looking at prior art i found which has an implementation of this for gcb which looks a bit similar i d like to start a discussion around how we could potentially accomplish this for the build pipeline for the first case just being able to run it locally in some form is sufficient for the second case users would likely want to be able to mount not yet pushed code to the job and build test that if you all are interested i d like to help with this i think it could be a killer feature to be able to iterate on running and testing your containerized pipelines locally before submitting them to whatever ci you will use in production i know users would love this cc bobcatfish
0
67,483
7,048,623,210
IssuesEvent
2018-01-02 18:29:42
open-horizon/anax
https://api.github.com/repos/open-horizon/anax
closed
Agbot optionally cancel agreement if agreement in exchange disappears
feature test
- Or it could check the node's agreement status in the exchange. - Or if the node heartbeat in the exchange is very stale - Also provide an anax api to cancel/unregister a microservice or pattern (remove fields in exchange so agbots won't find it, then explicitly cancel the agreements with the agbot) - moved to issue #369 . This can be used as an alternative to data verification, or in addition to it.
1.0
Agbot optionally cancel agreement if agreement in exchange disappears - - Or it could check the node's agreement status in the exchange. - Or if the node heartbeat in the exchange is very stale - Also provide an anax api to cancel/unregister a microservice or pattern (remove fields in exchange so agbots won't find it, then explicitly cancel the agreements with the agbot) - moved to issue #369 . This can be used as an alternative to data verification, or in addition to it.
test
agbot optionally cancel agreement if agreement in exchange disappears or it could check the node s agreement status in the exchange or if the node heartbeat in the exchange is very stale also provide an anax api to cancel unregister a microservice or pattern remove fields in exchange so agbots won t find it then explicitly cancel the agreements with the agbot moved to issue this can be used as an alternative to data verification or in addition to it
1
193,390
14,652,293,079
IssuesEvent
2020-12-28 01:03:19
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
golang/go: src/encoding/base32/base32_test.go; 18 LoC
fresh small test
Found a possible issue in [golang/go](https://www.github.com/golang/go) at [src/encoding/base32/base32_test.go](https:%2F%2Fgithub.com%2Fgolang%2Fgo%2Fblob%2F1d78139128d6d839d7da0aeb10b3e51b6c7c0749%2Fsrc%2Fencoding%2Fbase32%2Fbase32_test.go%23L609-L626) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable chunks used in defer or goroutine at line 614 [Click here to see the code in its original context.](https:%2F%2Fgithub.com%2Fgolang%2Fgo%2Fblob%2F1d78139128d6d839d7da0aeb10b3e51b6c7c0749%2Fsrc%2Fencoding%2Fbase32%2Fbase32_test.go%23L609-L626) <details> <summary>Click here to show the 18 line(s) of Go which triggered the analyzer.</summary> ```go for _, chunks := range testcase.chunkCombinations { pr, pw := io.Pipe() // Write the encoded chunks into the pipe go func() { for _, chunk := range chunks { pw.Write([]byte(chunk)) } pw.Close() }() decoder := NewDecoder(StdEncoding, pr) _, err := io.ReadAll(decoder) if err != testcase.expected { t.Errorf("Expected %v, got %v; case %s %+v", testcase.expected, err, testcase.prefix, chunks) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 1d78139128d6d839d7da0aeb10b3e51b6c7c0749
1.0
golang/go: src/encoding/base32/base32_test.go; 18 LoC - Found a possible issue in [golang/go](https://www.github.com/golang/go) at [src/encoding/base32/base32_test.go](https:%2F%2Fgithub.com%2Fgolang%2Fgo%2Fblob%2F1d78139128d6d839d7da0aeb10b3e51b6c7c0749%2Fsrc%2Fencoding%2Fbase32%2Fbase32_test.go%23L609-L626) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable chunks used in defer or goroutine at line 614 [Click here to see the code in its original context.](https:%2F%2Fgithub.com%2Fgolang%2Fgo%2Fblob%2F1d78139128d6d839d7da0aeb10b3e51b6c7c0749%2Fsrc%2Fencoding%2Fbase32%2Fbase32_test.go%23L609-L626) <details> <summary>Click here to show the 18 line(s) of Go which triggered the analyzer.</summary> ```go for _, chunks := range testcase.chunkCombinations { pr, pw := io.Pipe() // Write the encoded chunks into the pipe go func() { for _, chunk := range chunks { pw.Write([]byte(chunk)) } pw.Close() }() decoder := NewDecoder(StdEncoding, pr) _, err := io.ReadAll(decoder) if err != testcase.expected { t.Errorf("Expected %v, got %v; case %s %+v", testcase.expected, err, testcase.prefix, chunks) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 1d78139128d6d839d7da0aeb10b3e51b6c7c0749
test
golang go src encoding test go loc found a possible issue in at https com test go below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable chunks used in defer or goroutine at line https com test go click here to show the line s of go which triggered the analyzer go for chunks range testcase chunkcombinations pr pw io pipe write the encoded chunks into the pipe go func for chunk range chunks pw write byte chunk pw close decoder newdecoder stdencoding pr err io readall decoder if err testcase expected t errorf expected v got v case s v testcase expected err testcase prefix chunks leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
1
381,578
11,276,857,928
IssuesEvent
2020-01-15 00:39:30
EyeSeeTea/malariapp
https://api.github.com/repos/EyeSeeTea/malariapp
closed
1.4 Plus sign not clear in plan module
HNQIS complexity - med (1-5hr) priority - high type - maintenance
The plus sign in plan module is not clear in some devices. This is not specific to 1.5.1. Would appreciate if this is made darker. ![68012246-8d761500-fc9a-11e9-9ef8-4b941ad2d19c](https://user-images.githubusercontent.com/5593590/68568497-02e0a300-045c-11ea-9516-d1c37a119410.png)
1.0
1.4 Plus sign not clear in plan module - The plus sign in plan module is not clear in some devices. This is not specific to 1.5.1. Would appreciate if this is made darker. ![68012246-8d761500-fc9a-11e9-9ef8-4b941ad2d19c](https://user-images.githubusercontent.com/5593590/68568497-02e0a300-045c-11ea-9516-d1c37a119410.png)
non_test
plus sign not clear in plan module the plus sign in plan module is not clear in some devices this is not specific to would appreciate if this is made darker
0
208,804
15,934,524,251
IssuesEvent
2021-04-14 08:46:51
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
vscode API - env
integration-test-failure
``` 1) vscode API - env env.remoteName: AssertionError [ERR_ASSERTION] [ERR_ASSERTION]: The expression evaluated to a falsy value: assert.ok(knownUiAndWorkspaceExtension) at Context.<anonymous> (extensions/vscode-api-tests/src/singlefolder-tests/env.test.ts:44:11) at processImmediate (internal/timers.js:456:21) ``` https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=113384&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=fff0fe4e-512d-573e-9d0b-0e9f3e677ab7&l=695
1.0
vscode API - env - ``` 1) vscode API - env env.remoteName: AssertionError [ERR_ASSERTION] [ERR_ASSERTION]: The expression evaluated to a falsy value: assert.ok(knownUiAndWorkspaceExtension) at Context.<anonymous> (extensions/vscode-api-tests/src/singlefolder-tests/env.test.ts:44:11) at processImmediate (internal/timers.js:456:21) ``` https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=113384&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=fff0fe4e-512d-573e-9d0b-0e9f3e677ab7&l=695
test
vscode api env vscode api env env remotename assertionerror the expression evaluated to a falsy value assert ok knownuiandworkspaceextension at context extensions vscode api tests src singlefolder tests env test ts at processimmediate internal timers js
1
348,412
10,441,861,743
IssuesEvent
2019-09-18 11:51:03
threefoldfoundation/www_threefold.io_new
https://api.github.com/repos/threefoldfoundation/www_threefold.io_new
closed
Token Page: Title on smaller screens clashes with the 3 logo
priority_major state_inprogress
screen shot from @samtaggart 's laptop. ![photo_2019-09-18_10-51-12](https://user-images.githubusercontent.com/29431263/65133047-47266880-da02-11e9-8452-ee42f6fb8dce.jpg)
1.0
Token Page: Title on smaller screens clashes with the 3 logo - screen shot from @samtaggart 's laptop. ![photo_2019-09-18_10-51-12](https://user-images.githubusercontent.com/29431263/65133047-47266880-da02-11e9-8452-ee42f6fb8dce.jpg)
non_test
token page title on smaller screens clashes with the logo screen shot from samtaggart s laptop
0
51,019
12,641,724,144
IssuesEvent
2020-06-16 06:49:57
inspireui/support
https://api.github.com/repos/inspireui/support
closed
URL/Website not opening from Homescreen/JSON file
FluxBuilder FluxNews
**_Step 1 (require): describe detail issues & screenshots_** + Detail Issues: I used the Fluxbuilder to build my home screen. I used some banner images, which will then open an URL, but somehow instead of opening the URL it opens the blog category. + Product & version: Latest + Flutter (or React Native) version: Latest + Testing Device/Simulator: Android Studio iOS Simulator + Screenshot issues (drag the file to attach here): **_I am using Fluxnews._** This is how my code in the config_en.json looks like ![issue](https://user-images.githubusercontent.com/66735847/84463672-6ca08400-acad-11ea-900c-60e490e57d18.jpg) The banner image is showing correctly in the simulator, but once I clicked on it, it redirects me to a blog category instead of opening the URL. Also, will this URL open as webview, or will it open in the browser? I would like to show it as a webview. Thanks for your help and hopefully hear from you soon. **_Step 2 (require): submit proof of purchasing the license on http://verify.inspireui.com_** **_Important Note:_** - We will close the ticket if missing the proof of purchase on step 2. The screenshot purchase attaches here will be invalid. - Kindly create only *One Ticket* & includes all the issues, that would help us focus to resolve it better. - If your all ticket was resolved & closed, but want to create a new ticket, just do simple step by linking to the previous verified ID (you don't need to submit the form again), for example, #Ticket-ID Thank you so much for your time 😊
1.0
URL/Website not opening from Homescreen/JSON file - **_Step 1 (require): describe detail issues & screenshots_** + Detail Issues: I used the Fluxbuilder to build my home screen. I used some banner images, which will then open an URL, but somehow instead of opening the URL it opens the blog category. + Product & version: Latest + Flutter (or React Native) version: Latest + Testing Device/Simulator: Android Studio iOS Simulator + Screenshot issues (drag the file to attach here): **_I am using Fluxnews._** This is how my code in the config_en.json looks like ![issue](https://user-images.githubusercontent.com/66735847/84463672-6ca08400-acad-11ea-900c-60e490e57d18.jpg) The banner image is showing correctly in the simulator, but once I clicked on it, it redirects me to a blog category instead of opening the URL. Also, will this URL open as webview, or will it open in the browser? I would like to show it as a webview. Thanks for your help and hopefully hear from you soon. **_Step 2 (require): submit proof of purchasing the license on http://verify.inspireui.com_** **_Important Note:_** - We will close the ticket if missing the proof of purchase on step 2. The screenshot purchase attaches here will be invalid. - Kindly create only *One Ticket* & includes all the issues, that would help us focus to resolve it better. - If your all ticket was resolved & closed, but want to create a new ticket, just do simple step by linking to the previous verified ID (you don't need to submit the form again), for example, #Ticket-ID Thank you so much for your time 😊
non_test
url website not opening from homescreen json file step require describe detail issues screenshots detail issues i used the fluxbuilder to build my home screen i used some banner images which will then open an url but somehow instead of opening the url it opens the blog category product version latest flutter or react native version latest testing device simulator android studio ios simulator screenshot issues drag the file to attach here i am using fluxnews this is how my code in the config en json looks like the banner image is showing correctly in the simulator but once i clicked on it it redirects me to a blog category instead of opening the url also will this url open as webview or will it open in the browser i would like to show it as a webview thanks for your help and hopefully hear from you soon step require submit proof of purchasing the license on important note we will close the ticket if missing the proof of purchase on step the screenshot purchase attaches here will be invalid kindly create only one ticket includes all the issues that would help us focus to resolve it better if your all ticket was resolved closed but want to create a new ticket just do simple step by linking to the previous verified id you don t need to submit the form again for example ticket id thank you so much for your time 😊
0
343,744
30,687,074,111
IssuesEvent
2023-07-26 13:02:02
kiali/kiali
https://api.github.com/repos/kiali/kiali
closed
[flake] See service Traffic information
bug backlog test-cypress :robot:
Cypress test 'See service Traffic information' fails in CI. It expects to see 'istio-ingressgateway' in 'productpage' Service's Traffc tab, but it does not: ![Kiali Service Details page -- See service Traffic information (failed)](https://github.com/kiali/kiali/assets/604313/093782b2-4ff8-4543-8d5e-7233fb62d9d2)
1.0
[flake] See service Traffic information - Cypress test 'See service Traffic information' fails in CI. It expects to see 'istio-ingressgateway' in 'productpage' Service's Traffc tab, but it does not: ![Kiali Service Details page -- See service Traffic information (failed)](https://github.com/kiali/kiali/assets/604313/093782b2-4ff8-4543-8d5e-7233fb62d9d2)
test
see service traffic information cypress test see service traffic information fails in ci it expects to see istio ingressgateway in productpage service s traffc tab but it does not
1
10,623
3,131,182,985
IssuesEvent
2015-09-09 13:39:34
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
Test failure in CI build 7137
test-failure
The following test appears to have failed: [#7137](https://circleci.com/gh/cockroachdb/cockroach/7137): ``` tb6627f72 127.0.0.1:0 13:31:32.350564 5.128µs ·meta descriptor lookup kv/dist_sender.go:571 tb6627f72 127.0.0.1:0 13:31:32.350572 668.739µs ·sending RPC kv/dist_sender.go:477 tb6627f72 127.0.0.1:0 13:31:32.350600 0 ··sending to 127.0.0.1:52531 rpc/send.go:171 testdata/subquery:1: SELECT (SELECT 1) --- FAIL: TestLogic (4.82s) logic_test.go:392: testdata/subquery:1: expected success, but found sql/parser/eval.go:1044: eval: unexpected expression: *parser.Subquery === RUN TestPrivilege --- PASS: TestPrivilege (0.00s) === RUN TestPrivilegeValidate --- PASS: TestPrivilegeValidate (0.00s) === RUN TestSystemPrivilegeValidate --- PASS: TestSystemPrivilegeValidate (0.00s) === RUN TestAllocateIDs --- PASS: TestAllocateIDs (0.00s) === RUN TestValidateTableDesc --- PASS: TestValidateTableDesc (0.01s) === RUN TestColumnTypeSQLString --- PASS: TestColumnTypeSQLString (0.00s) FAIL FAIL github.com/cockroachdb/cockroach/sql 5.267s === RUN TestDatumString --- PASS: TestDatumString (0.00s) === RUN TestPlaceholders I0909 13:31:30.606894 857 base/context.go:155 setting up TLS from certificates directory: test_certs I0909 13:31:30.608070 857 base/context.go:117 setting up TLS from certificates directory: test_certs I0909 13:31:30.610750 857 rpc/clock_offset.go:155 monitoring cluster offset I0909 13:31:30.611050 857 multiraft/multiraft.go:447 node 100000001 starting I0909 13:31:30.611279 857 raft/raft.go:406 group 1 100000001 became follower at term 5 I0909 13:31:30.611382 857 raft/raft.go:219 group 1 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5] I0909 13:31:30.611517 857 raft/raft.go:485 group 1 100000001 is starting a new election at term 5 -- === RUN TestNormalizeTableName --- PASS: TestNormalizeTableName (0.00s) === RUN TestNormalizeColumnName --- PASS: TestNormalizeColumnName (0.00s) === RUN TestNormalizeExpr --- FAIL: 
TestNormalizeExpr (0.01s) normalize_test.go:90: (SELECT 1): sql/parser/eval.go:1044: eval: unexpected expression: *parser.Subquery === RUN TestNormalizeExprError --- PASS: TestNormalizeExprError (0.00s) === RUN TestParse --- PASS: TestParse (0.03s) === RUN TestParse2 --- PASS: TestParse2 (0.01s) === RUN TestParseSyntax --- PASS: TestParseSyntax (0.00s) === RUN TestParseError -- --- PASS: TestFillArgs (0.01s) === RUN TestFillArgsError --- PASS: TestFillArgsError (0.00s) === RUN TestWalkStmt --- PASS: TestWalkStmt (0.00s) FAIL FAIL github.com/cockroachdb/cockroach/sql/parser 0.102s === RUN TestPrivilegeDecode --- PASS: TestPrivilegeDecode (0.00s) PASS ok github.com/cockroachdb/cockroach/sql/privilege 0.012s === RUN TestUpdateRangeAddressing I0909 13:31:53.150960 1005 multiraft/multiraft.go:447 node 100000001 starting I0909 13:31:53.151183 1005 storage/replica.go:1191 gossiping cluster id from store 1, range 1 I0909 13:31:53.151465 1005 raft/raft.go:406 group 1 100000001 became follower at term 5 I0909 13:31:53.151564 1005 raft/raft.go:219 group 1 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5] I0909 13:31:53.151794 1005 raft/raft.go:485 group 1 100000001 is starting a new election at term 5 ``` Please assign, take a look and update the issue accordingly.
1.0
Test failure in CI build 7137 - The following test appears to have failed: [#7137](https://circleci.com/gh/cockroachdb/cockroach/7137): ``` tb6627f72 127.0.0.1:0 13:31:32.350564 5.128µs ·meta descriptor lookup kv/dist_sender.go:571 tb6627f72 127.0.0.1:0 13:31:32.350572 668.739µs ·sending RPC kv/dist_sender.go:477 tb6627f72 127.0.0.1:0 13:31:32.350600 0 ··sending to 127.0.0.1:52531 rpc/send.go:171 testdata/subquery:1: SELECT (SELECT 1) --- FAIL: TestLogic (4.82s) logic_test.go:392: testdata/subquery:1: expected success, but found sql/parser/eval.go:1044: eval: unexpected expression: *parser.Subquery === RUN TestPrivilege --- PASS: TestPrivilege (0.00s) === RUN TestPrivilegeValidate --- PASS: TestPrivilegeValidate (0.00s) === RUN TestSystemPrivilegeValidate --- PASS: TestSystemPrivilegeValidate (0.00s) === RUN TestAllocateIDs --- PASS: TestAllocateIDs (0.00s) === RUN TestValidateTableDesc --- PASS: TestValidateTableDesc (0.01s) === RUN TestColumnTypeSQLString --- PASS: TestColumnTypeSQLString (0.00s) FAIL FAIL github.com/cockroachdb/cockroach/sql 5.267s === RUN TestDatumString --- PASS: TestDatumString (0.00s) === RUN TestPlaceholders I0909 13:31:30.606894 857 base/context.go:155 setting up TLS from certificates directory: test_certs I0909 13:31:30.608070 857 base/context.go:117 setting up TLS from certificates directory: test_certs I0909 13:31:30.610750 857 rpc/clock_offset.go:155 monitoring cluster offset I0909 13:31:30.611050 857 multiraft/multiraft.go:447 node 100000001 starting I0909 13:31:30.611279 857 raft/raft.go:406 group 1 100000001 became follower at term 5 I0909 13:31:30.611382 857 raft/raft.go:219 group 1 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5] I0909 13:31:30.611517 857 raft/raft.go:485 group 1 100000001 is starting a new election at term 5 -- === RUN TestNormalizeTableName --- PASS: TestNormalizeTableName (0.00s) === RUN TestNormalizeColumnName --- PASS: TestNormalizeColumnName (0.00s) === RUN 
TestNormalizeExpr --- FAIL: TestNormalizeExpr (0.01s) normalize_test.go:90: (SELECT 1): sql/parser/eval.go:1044: eval: unexpected expression: *parser.Subquery === RUN TestNormalizeExprError --- PASS: TestNormalizeExprError (0.00s) === RUN TestParse --- PASS: TestParse (0.03s) === RUN TestParse2 --- PASS: TestParse2 (0.01s) === RUN TestParseSyntax --- PASS: TestParseSyntax (0.00s) === RUN TestParseError -- --- PASS: TestFillArgs (0.01s) === RUN TestFillArgsError --- PASS: TestFillArgsError (0.00s) === RUN TestWalkStmt --- PASS: TestWalkStmt (0.00s) FAIL FAIL github.com/cockroachdb/cockroach/sql/parser 0.102s === RUN TestPrivilegeDecode --- PASS: TestPrivilegeDecode (0.00s) PASS ok github.com/cockroachdb/cockroach/sql/privilege 0.012s === RUN TestUpdateRangeAddressing I0909 13:31:53.150960 1005 multiraft/multiraft.go:447 node 100000001 starting I0909 13:31:53.151183 1005 storage/replica.go:1191 gossiping cluster id from store 1, range 1 I0909 13:31:53.151465 1005 raft/raft.go:406 group 1 100000001 became follower at term 5 I0909 13:31:53.151564 1005 raft/raft.go:219 group 1 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5] I0909 13:31:53.151794 1005 raft/raft.go:485 group 1 100000001 is starting a new election at term 5 ``` Please assign, take a look and update the issue accordingly.
test
test failure in ci build the following test appears to have failed ·meta descriptor lookup kv dist sender go ·sending rpc kv dist sender go ··sending to rpc send go testdata subquery select select fail testlogic logic test go testdata subquery expected success but found sql parser eval go eval unexpected expression parser subquery run testprivilege pass testprivilege run testprivilegevalidate pass testprivilegevalidate run testsystemprivilegevalidate pass testsystemprivilegevalidate run testallocateids pass testallocateids run testvalidatetabledesc pass testvalidatetabledesc run testcolumntypesqlstring pass testcolumntypesqlstring fail fail github com cockroachdb cockroach sql run testdatumstring pass testdatumstring run testplaceholders base context go setting up tls from certificates directory test certs base context go setting up tls from certificates directory test certs rpc clock offset go monitoring cluster offset multiraft multiraft go node starting raft raft go group became follower at term raft raft go group newraft term commit applied lastindex lastterm raft raft go group is starting a new election at term run testnormalizetablename pass testnormalizetablename run testnormalizecolumnname pass testnormalizecolumnname run testnormalizeexpr fail testnormalizeexpr normalize test go select sql parser eval go eval unexpected expression parser subquery run testnormalizeexprerror pass testnormalizeexprerror run testparse pass testparse run pass run testparsesyntax pass testparsesyntax run testparseerror pass testfillargs run testfillargserror pass testfillargserror run testwalkstmt pass testwalkstmt fail fail github com cockroachdb cockroach sql parser run testprivilegedecode pass testprivilegedecode pass ok github com cockroachdb cockroach sql privilege run testupdaterangeaddressing multiraft multiraft go node starting storage replica go gossiping cluster id from store range raft raft go group became follower at term raft raft go group newraft term commit 
applied lastindex lastterm raft raft go group is starting a new election at term please assign take a look and update the issue accordingly
1
508,550
14,702,445,572
IssuesEvent
2021-01-04 13:36:52
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.google.com - site is not usable
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false priority-critical
<!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/64822 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.google.com/travel/hotels/sonfjällets wärdshus/entity/CgsIkeqJ-_jj6NjXARAB/writereview?g2lb=2502548,2503771,4258168,4306835,4317915,4328159,4371335,4401769,4419364,4428793,4429192,4431137,4463263,4463666,4464463,4474859,4480320,4482194,4482434,4482436,4270859,4284970 **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - site is not usable - <!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/64822 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.google.com/travel/hotels/sonfjällets wärdshus/entity/CgsIkeqJ-_jj6NjXARAB/writereview?g2lb=2502548,2503771,4258168,4306835,4317915,4328159,4371335,4401769,4419364,4428793,4429192,4431137,4463263,4463666,4464463,4474859,4480320,4482194,4482434,4482436,4270859,4284970 **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_test
site is not usable url wärdshus entity cgsikeqj writereview browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce browser configuration none from with ❤️
0
225,072
17,791,838,402
IssuesEvent
2021-08-31 17:06:06
saltstack/salt
https://api.github.com/repos/saltstack/salt
opened
[TEST FAILURE] Mac tests are hanging on tests/integration/modules/test_mac_assistive.py
Test-Failure
The mac tests are hanging on this test: tests/integration/modules/test_mac_assistive.py ``` 14:37:24 tests/integration/modules/test_mac_assistive.py::MacAssistiveTest::test_installed PASSED 14:37:24 ----------------------------- Processes Statistics ----------------------------- 14:37:24 .................... System - CPU: 23.80 % MEM: 38.90 % (Virtual Memory) 14:37:24 ............ Test Suite Run - CPU: 4.90 % MEM: 6.64 % (RSS) MEM SUM: 15.71 % (RSS) CHILD PROCS: 15 14:37:24 ... SaltMaster(id='master') - CPU: 0.00 % MEM: 0.75 % (RSS) MEM SUM: 7.33 % (RSS) CHILD PROCS: 11 14:37:24 ... SaltMinion(id='minion') - CPU: 1.70 % MEM: 1.17 % (RSS) MEM SUM: 1.74 % (RSS) CHILD PROCS: 2 ``` https://jenkins.saltproject.io/job/pr-macosx-mojave-x86_64-py3-pytest-slow/job/master/552/
1.0
[TEST FAILURE] Mac tests are hanging on tests/integration/modules/test_mac_assistive.py - The mac tests are hanging on this test: tests/integration/modules/test_mac_assistive.py ``` 14:37:24 tests/integration/modules/test_mac_assistive.py::MacAssistiveTest::test_installed PASSED 14:37:24 ----------------------------- Processes Statistics ----------------------------- 14:37:24 .................... System - CPU: 23.80 % MEM: 38.90 % (Virtual Memory) 14:37:24 ............ Test Suite Run - CPU: 4.90 % MEM: 6.64 % (RSS) MEM SUM: 15.71 % (RSS) CHILD PROCS: 15 14:37:24 ... SaltMaster(id='master') - CPU: 0.00 % MEM: 0.75 % (RSS) MEM SUM: 7.33 % (RSS) CHILD PROCS: 11 14:37:24 ... SaltMinion(id='minion') - CPU: 1.70 % MEM: 1.17 % (RSS) MEM SUM: 1.74 % (RSS) CHILD PROCS: 2 ``` https://jenkins.saltproject.io/job/pr-macosx-mojave-x86_64-py3-pytest-slow/job/master/552/
test
mac tests are hanging on tests integration modules test mac assistive py the mac tests are hanging on this test tests integration modules test mac assistive py tests integration modules test mac assistive py macassistivetest test installed    processes statistics  system cpu mem virtual memory test suite run cpu mem rss mem sum rss child procs saltmaster id master cpu mem rss mem sum rss child procs saltminion id minion cpu mem rss mem sum rss child procs
1
309,171
23,286,535,310
IssuesEvent
2022-08-05 17:08:03
apexlang/apexlang.io
https://api.github.com/repos/apexlang/apexlang.io
opened
Content for custom generator modules
documentation
Explanation of: - [ ] Generators - [ ] Templates - [ ] Definitions (mostly used for importing directives)
1.0
Content for custom generator modules - Explanation of: - [ ] Generators - [ ] Templates - [ ] Definitions (mostly used for importing directives)
non_test
content for custom generator modules explanation of generators templates definitions mostly used for importing directives
0
69,000
3,294,841,769
IssuesEvent
2015-10-31 12:07:32
BramVanroy/earley-interface
https://api.github.com/repos/BramVanroy/earley-interface
opened
Add split screen option
interface low priority
This is useful for higher-resolution screen sizes where a full-width table is not ideal
1.0
Add split screen option - This is useful for higher-resolution screen sizes where a full-width table is not ideal
non_test
add split screen option this is useful for higher resolution screen sizes where a full width table is not ideal
0
9,030
6,107,040,364
IssuesEvent
2017-06-21 06:55:00
apinf/open-api-designer
https://api.github.com/repos/apinf/open-api-designer
closed
Info text for Host field should be more intuitive
enhancement usability issue
# Issue The info message given for Host field is vauge. It doesn't clearly state the purpose of the field, how is it related to Base field and what example value should be given in it. # Solution Give intuitive text explaining the above description for Host field.
True
Info text for Host field should be more intuitive - # Issue The info message given for Host field is vauge. It doesn't clearly state the purpose of the field, how is it related to Base field and what example value should be given in it. # Solution Give intuitive text explaining the above description for Host field.
non_test
info text for host field should be more intuitive issue the info message given for host field is vauge it doesn t clearly state the purpose of the field how is it related to base field and what example value should be given in it solution give intuitive text explaining the above description for host field
0
394,714
11,647,849,396
IssuesEvent
2020-03-01 17:23:13
oral-health-and-disease-ontologies/ohd-ontology
https://api.github.com/repos/oral-health-and-disease-ontologies/ohd-ontology
opened
Date information in IRI for OBO Dashboard
Priority-High
Dashboard reports: ``` Version IRI does not have date information ``` Actually, we have this. It just isn't in the recommended format. See dashboard: http://obo-dashboard-test.ontodev.com/ohd/dashboard.html
1.0
Date information in IRI for OBO Dashboard - Dashboard reports: ``` Version IRI does not have date information ``` Actually, we have this. It just isn't in the recommended format. See dashboard: http://obo-dashboard-test.ontodev.com/ohd/dashboard.html
non_test
date information in iri for obo dashboard dashboard reports version iri does not have date information actually we have this it just isn t in the recommended format see dashboard
0
173,547
13,428,597,036
IssuesEvent
2020-09-06 22:30:06
medic/cht-core
https://api.github.com/repos/medic/cht-core
closed
Flaky test: token login should redirect the user to the app if already logged in
Testing Type: Technical issue
**Describe the issue** I've seen a [few builds](https://travis-ci.org/github/medic/cht-core/builds/719775504) fail with this error: ``` token login ✗ should redirect the user to the app if already logged in (3 secs) - Failed: stale element reference: element is not attached to the page document  (Session info: headless chrome=84.0.4147.135)  (Driver info: chromedriver=83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103@{#416}),platform=Linux 4.15.0-1077-gcp x86_64) (Session info: headless chrome=84.0.4147.135) (Driver info: chromedriver=83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103@{#416}),platform=Linux 4.15.0-1077-gcp x86_64) at Object.checkLegacyResponse (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/error.js:546:15) at parseHttpResponse (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/http.js:509:13) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/http.js:441:30 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: WebElement.isDisplayed() at thenableWebDriverProxy.schedule (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:807:17) at WebElement.schedule_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:2010:25) at WebElement.isDisplayed (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:2362:17) at actionFn (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:89:44) at Array.map (<anonymous>) at /home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:461:65 at ManagedPromise.invokeCallback_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1376:14) at TaskQueue.execute_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ 
(/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2927:27 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5)Error at ElementArrayFinder.applyAction_ (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:459:27) at ElementArrayFinder.<computed> [as isDisplayed] (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:91:29) at ElementFinder.<computed> [as isDisplayed] (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:831:22) at /home/travis/build/medic/cht-core/tests/helper.js:235:10 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:938:14 at TaskQueue.execute_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2974:25 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: <anonymous> at pollCondition (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2195:19) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2191:7 at new ManagedPromise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1077:7) at ControlFlow.promise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2505:12) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2190:22 at TaskQueue.execute_ 
(/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2927:27 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: <anonymous wait> at scheduleWait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2188:20) at ControlFlow.wait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2517:12) at thenableWebDriverProxy.wait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:934:29) at run (/home/travis/build/medic/cht-core/node_modules/protractor/built/browser.js:59:33) at ProtractorBrowser.to.<computed> [as wait] (/home/travis/build/medic/cht-core/node_modules/protractor/built/browser.js:67:16) at Object.waitElementToDisappear (/home/travis/build/medic/cht-core/tests/helper.js:233:13) at waitForLoaderToDisappear (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:66:14) at UserContext.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:76:5) at /home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:112:25 at new ManagedPromise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1077:7) at ControlFlow.promise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2505:12) at schedulerExecute (/home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:95:18) at TaskQueue.execute_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ 
(/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2974:25 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: Run it("should redirect the user to the app if already logged in") in control flow at UserContext.<anonymous> (/home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:94:19) at /home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:64:48 at ControlFlow.emit (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/events.js:62:21) at ControlFlow.shutdown_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2674:10) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2599:53 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2728:9 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From asynchronous test: Error at Suite.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:72:3) at Object.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:45:1) at Module._compile (internal/modules/cjs/loader.js:1137:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10) at Module.load (/home/travis/build/medic/cht-core/node_modules/coffeescript/lib/coffee-script/register.js:45:36) at Function.Module._load (internal/modules/cjs/loader.js:878:14) at Module.require (internal/modules/cjs/loader.js:1025:19) at require (internal/modules/cjs/helpers.js:72:18) at Array.forEach (<anonymous>) at Function.promise 
(/home/travis/build/medic/cht-core/node_modules/q/q.js:682:9) at _fulfilled (/home/travis/build/medic/cht-core/node_modules/q/q.js:834:54) at /home/travis/build/medic/cht-core/node_modules/q/q.js:863:30 at Promise.promise.promiseDispatch (/home/travis/build/medic/cht-core/node_modules/q/q.js:796:13) at /home/travis/build/medic/cht-core/node_modules/q/q.js:604:44 at runSingle (/home/travis/build/medic/cht-core/node_modules/q/q.js:137:13) at flush (/home/travis/build/medic/cht-core/node_modules/q/q.js:125:13) at processTicksAndRejections (internal/process/task_queues.js:79:11) ✓ should display an error when token login is disabled (1 sec) ```
1.0
Flaky test: token login should redirect the user to the app if already logged in - **Describe the issue** I've seen a [few builds](https://travis-ci.org/github/medic/cht-core/builds/719775504) fail with this error: ``` token login ✗ should redirect the user to the app if already logged in (3 secs) - Failed: stale element reference: element is not attached to the page document  (Session info: headless chrome=84.0.4147.135)  (Driver info: chromedriver=83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103@{#416}),platform=Linux 4.15.0-1077-gcp x86_64) (Session info: headless chrome=84.0.4147.135) (Driver info: chromedriver=83.0.4103.39 (ccbf011cb2d2b19b506d844400483861342c20cd-refs/branch-heads/4103@{#416}),platform=Linux 4.15.0-1077-gcp x86_64) at Object.checkLegacyResponse (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/error.js:546:15) at parseHttpResponse (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/http.js:509:13) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/http.js:441:30 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: WebElement.isDisplayed() at thenableWebDriverProxy.schedule (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:807:17) at WebElement.schedule_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:2010:25) at WebElement.isDisplayed (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:2362:17) at actionFn (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:89:44) at Array.map (<anonymous>) at /home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:461:65 at ManagedPromise.invokeCallback_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1376:14) at TaskQueue.execute_ 
(/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2927:27 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5)Error at ElementArrayFinder.applyAction_ (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:459:27) at ElementArrayFinder.<computed> [as isDisplayed] (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:91:29) at ElementFinder.<computed> [as isDisplayed] (/home/travis/build/medic/cht-core/node_modules/protractor/built/element.js:831:22) at /home/travis/build/medic/cht-core/tests/helper.js:235:10 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:938:14 at TaskQueue.execute_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2974:25 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: <anonymous> at pollCondition (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2195:19) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2191:7 at new ManagedPromise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1077:7) at ControlFlow.promise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2505:12) at 
/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2190:22 at TaskQueue.execute_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2927:27 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: <anonymous wait> at scheduleWait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2188:20) at ControlFlow.wait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2517:12) at thenableWebDriverProxy.wait (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/webdriver.js:934:29) at run (/home/travis/build/medic/cht-core/node_modules/protractor/built/browser.js:59:33) at ProtractorBrowser.to.<computed> [as wait] (/home/travis/build/medic/cht-core/node_modules/protractor/built/browser.js:67:16) at Object.waitElementToDisappear (/home/travis/build/medic/cht-core/tests/helper.js:233:13) at waitForLoaderToDisappear (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:66:14) at UserContext.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:76:5) at /home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:112:25 at new ManagedPromise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:1077:7) at ControlFlow.promise (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2505:12) at schedulerExecute (/home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:95:18) at TaskQueue.execute_ 
(/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3084:14) at TaskQueue.executeNext_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:3067:27) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2974:25 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From: Task: Run it("should redirect the user to the app if already logged in") in control flow at UserContext.<anonymous> (/home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:94:19) at /home/travis/build/medic/cht-core/node_modules/jasminewd2/index.js:64:48 at ControlFlow.emit (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/events.js:62:21) at ControlFlow.shutdown_ (/home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2674:10) at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2599:53 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:2728:9 at /home/travis/build/medic/cht-core/node_modules/selenium-webdriver/lib/promise.js:668:7 at runMicrotasks (<anonymous>) at processTicksAndRejections (internal/process/task_queues.js:97:5) From asynchronous test: Error at Suite.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:72:3) at Object.<anonymous> (/home/travis/build/medic/cht-core/tests/e2e/login/token-login.spec.js:45:1) at Module._compile (internal/modules/cjs/loader.js:1137:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10) at Module.load (/home/travis/build/medic/cht-core/node_modules/coffeescript/lib/coffee-script/register.js:45:36) at Function.Module._load (internal/modules/cjs/loader.js:878:14) at Module.require (internal/modules/cjs/loader.js:1025:19) at require 
(internal/modules/cjs/helpers.js:72:18) at Array.forEach (<anonymous>) at Function.promise (/home/travis/build/medic/cht-core/node_modules/q/q.js:682:9) at _fulfilled (/home/travis/build/medic/cht-core/node_modules/q/q.js:834:54) at /home/travis/build/medic/cht-core/node_modules/q/q.js:863:30 at Promise.promise.promiseDispatch (/home/travis/build/medic/cht-core/node_modules/q/q.js:796:13) at /home/travis/build/medic/cht-core/node_modules/q/q.js:604:44 at runSingle (/home/travis/build/medic/cht-core/node_modules/q/q.js:137:13) at flush (/home/travis/build/medic/cht-core/node_modules/q/q.js:125:13) at processTicksAndRejections (internal/process/task_queues.js:79:11) ✓ should display an error when token login is disabled (1 sec) ```
test
flaky test token login should redirect the user to the app if already logged in describe the issue i ve seen a fail with this error token login  — should redirect the user to the app if already logged in secs    stale element reference element is not attached to the page document  session info headless chrome   driver info chromedriver refs branch heads platform linux gcp  session info headless chrome driver info chromedriver refs branch heads platform linux gcp at object checklegacyresponse home travis build medic cht core node modules selenium webdriver lib error js at parsehttpresponse home travis build medic cht core node modules selenium webdriver lib http js at home travis build medic cht core node modules selenium webdriver lib http js at runmicrotasks at processticksandrejections internal process task queues js from task webelement isdisplayed at thenablewebdriverproxy schedule home travis build medic cht core node modules selenium webdriver lib webdriver js at webelement schedule home travis build medic cht core node modules selenium webdriver lib webdriver js at webelement isdisplayed home travis build medic cht core node modules selenium webdriver lib webdriver js at actionfn home travis build medic cht core node modules protractor built element js at array map at home travis build medic cht core node modules protractor built element js at managedpromise invokecallback home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue execute home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at runmicrotasks at processticksandrejections internal process task queues js error at elementarrayfinder applyaction home travis build medic 
cht core node modules protractor built element js at elementarrayfinder home travis build medic cht core node modules protractor built element js at elementfinder home travis build medic cht core node modules protractor built element js at home travis build medic cht core tests helper js at home travis build medic cht core node modules selenium webdriver lib webdriver js at taskqueue execute home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at runmicrotasks at processticksandrejections internal process task queues js from task at pollcondition home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at new managedpromise home travis build medic cht core node modules selenium webdriver lib promise js at controlflow promise home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue execute home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at runmicrotasks at processticksandrejections internal process task queues js from task at schedulewait home travis build medic cht core node modules selenium webdriver lib promise js at controlflow wait home travis build medic cht core node modules selenium webdriver lib promise js at thenablewebdriverproxy wait home 
travis build medic cht core node modules selenium webdriver lib webdriver js at run home travis build medic cht core node modules protractor built browser js at protractorbrowser to home travis build medic cht core node modules protractor built browser js at object waitelementtodisappear home travis build medic cht core tests helper js at waitforloadertodisappear home travis build medic cht core tests login token login spec js at usercontext home travis build medic cht core tests login token login spec js at home travis build medic cht core node modules index js at new managedpromise home travis build medic cht core node modules selenium webdriver lib promise js at controlflow promise home travis build medic cht core node modules selenium webdriver lib promise js at schedulerexecute home travis build medic cht core node modules index js at taskqueue execute home travis build medic cht core node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at runmicrotasks at processticksandrejections internal process task queues js from task run it should redirect the user to the app if already logged in in control flow at usercontext home travis build medic cht core node modules index js at home travis build medic cht core node modules index js at controlflow emit home travis build medic cht core node modules selenium webdriver lib events js at controlflow shutdown home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at home travis build medic cht core node modules selenium webdriver lib promise js at runmicrotasks at 
processticksandrejections internal process task queues js from asynchronous test error at suite home travis build medic cht core tests login token login spec js at object home travis build medic cht core tests login token login spec js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load home travis build medic cht core node modules coffeescript lib coffee script register js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at array foreach at function promise home travis build medic cht core node modules q q js at fulfilled home travis build medic cht core node modules q q js at home travis build medic cht core node modules q q js at promise promise promisedispatch home travis build medic cht core node modules q q js at home travis build medic cht core node modules q q js at runsingle home travis build medic cht core node modules q q js at flush home travis build medic cht core node modules q q js at processticksandrejections internal process task queues js  “ should display an error when token login is disabled sec
1
264,255
20,012,526,933
IssuesEvent
2022-02-01 08:38:02
gardener/gardener
https://api.github.com/repos/gardener/gardener
closed
Gardener API documentation is missing details that some fields/resources are immutable
kind/enhancement area/documentation priority/3
**How to categorize this issue?** <!-- Please select area, kind, and priority for this issue. This helps the community categorizing it. Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion. If multiple identifiers make sense you can also state the commands multiple times, e.g. /area control-plane /area auto-scaling ... "/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management "/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test "/priority" identifiers: 1|2|3|4|5 (ordered from greatest to least) --> /area documentation /kind enhancement /priority 3 **What would you like to be added**: [Gardener API Documentation](https://gardener.cloud/docs/references/core/#core.gardener.cloud/v1beta1.Shoot) to be extended to mention "immutable" for all Gardener API fields that are immutable, like for example Shoot’s `spec.secretBindingName` and SecretBinding’s `spec.secretRef` **Why is this needed**: It is very important users of the Gardener project to be able to find this information in the Gardener API documentation (not via testing the Gardener API).
1.0
Gardener API documentation is missing details that some fields/resources are immutable - **How to categorize this issue?** <!-- Please select area, kind, and priority for this issue. This helps the community categorizing it. Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion. If multiple identifiers make sense you can also state the commands multiple times, e.g. /area control-plane /area auto-scaling ... "/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management "/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test "/priority" identifiers: 1|2|3|4|5 (ordered from greatest to least) --> /area documentation /kind enhancement /priority 3 **What would you like to be added**: [Gardener API Documentation](https://gardener.cloud/docs/references/core/#core.gardener.cloud/v1beta1.Shoot) to be extended to mention "immutable" for all Gardener API fields that are immutable, like for example Shoot’s `spec.secretBindingName` and SecretBinding’s `spec.secretRef` **Why is this needed**: It is very important users of the Gardener project to be able to find this information in the Gardener API documentation (not via testing the Gardener API).
non_test
gardener api documentation is missing details that some fields resources are immutable how to categorize this issue please select area kind and priority for this issue this helps the community categorizing it replace below todos or exchange the existing identifiers with those that fit best in your opinion if multiple identifiers make sense you can also state the commands multiple times e g area control plane area auto scaling area identifiers audit logging auto scaling backup certification control plane migration control plane cost delivery dev productivity disaster recovery documentation high availability logging metering monitoring networking open source ops productivity os performance quality robustness scalability security storage testing usability user management kind identifiers api change bug cleanup discussion enhancement epic impediment poc post mortem question regression task technical debt test priority identifiers ordered from greatest to least area documentation kind enhancement priority what would you like to be added to be extended to mention immutable for all gardener api fields that are immutable like for example shoot’s spec secretbindingname and secretbinding’s spec secretref why is this needed it is very important users of the gardener project to be able to find this information in the gardener api documentation not via testing the gardener api
0
157,570
12,378,514,110
IssuesEvent
2020-05-19 10:49:35
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
opened
(warning) Member variable 'Joint
bug cppcheck static analysis testing triage
```yaml { "id": 1, "title": "(warning) Member variable 'Joint", "type": "bug", "description": "[src/ros_control/controller_manager/test/hwi_switch_test.cpp:79]: (warning) Member variable 'Joint::dummy_' is not initialized in the constructor.", "cwe": "None", "cve": "None", "keywords": [ "cppcheck", "static analysis", "testing", "triage", "bug" ], "system": "src/ros_control/controller_manager/test/hwi_switch_test.cpp", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "N/A", "architectural-location": "N/A", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-19 (10:49)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-19 (10:49)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
1.0
(warning) Member variable 'Joint - ```yaml { "id": 1, "title": "(warning) Member variable 'Joint", "type": "bug", "description": "[src/ros_control/controller_manager/test/hwi_switch_test.cpp:79]: (warning) Member variable 'Joint::dummy_' is not initialized in the constructor.", "cwe": "None", "cve": "None", "keywords": [ "cppcheck", "static analysis", "testing", "triage", "bug" ], "system": "src/ros_control/controller_manager/test/hwi_switch_test.cpp", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "N/A", "architectural-location": "N/A", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-19 (10:49)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-19 (10:49)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
test
warning member variable joint yaml id title warning member variable joint type bug description warning member variable joint dummy is not initialized in the constructor cwe none cve none keywords cppcheck static analysis testing triage bug system src ros control controller manager test hwi switch test cpp vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity n a architectural location n a application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
1
375,563
26,168,275,925
IssuesEvent
2023-01-01 15:22:23
blackjax-devs/blackjax
https://api.github.com/repos/blackjax-devs/blackjax
opened
Add a tutorial for BNNs with Flax
documentation
We already have examples that use Flax, and it would be useful to explain *step-by-step* how to build a bayesian neural network with the library.
1.0
Add a tutorial for BNNs with Flax - We already have examples that use Flax, and it would be useful to explain *step-by-step* how to build a bayesian neural network with the library.
non_test
add a tutorial for bnns with flax we already have examples that use flax and it would be useful to explain step by step how to build a bayesian neural network with the library
0
155,111
24,405,841,142
IssuesEvent
2022-10-05 07:54:43
defaulterrr/hse-skitle
https://api.github.com/repos/defaulterrr/hse-skitle
closed
Customer Interview Plan
Advanced Software Design
Provide a plan/set of questions to ask customers Example of questions: - You have no electricity at home, what will you do? - A pipe in your bathroom is leaking, what will you do?
1.0
Customer Interview Plan - Provide a plan/set of questions to ask customers Example of questions: - You have no electricity at home, what will you do? - A pipe in your bathroom is leaking, what will you do?
non_test
customer interview plan provide a plan set of questions to ask customers example of questions you have no electricity at home what will you do a pipe in your bathroom is leaking what will you do
0
304,987
26,352,926,223
IssuesEvent
2023-01-11 07:14:35
Slimefun/Slimefun4
https://api.github.com/repos/Slimefun/Slimefun4
opened
Could not pass event FurnaceSmeltEvent
🐞 Bug Report 🎯 Needs testing
### ❗ Checklist - [X] I am using the official english version of Slimefun and did not modify the jar. - [X] I am using an up to date "DEV" (not "RC") version of Slimefun. - [X] I am aware that issues related to Slimefun addons need to be reported on their bug trackers and not here. - [X] I searched for similar open issues and could not find an existing bug report on this. ### 📍 Description N/A ### 📑 Reproduction Steps Unknown ### 💡 Expected Behavior N/A ### 📷 Screenshots / Videos _No response_ ### 📜 Server Log https://paste.gg/p/ConaII_/ce0b4a7588a34a96a7b233f099e54ec7 ### 📂 `/error-reports/` folder _No response_ ### 💻 Server Software Purpur ### 🎮 Minecraft Version 1.19.x ### ⭐ Slimefun version DEV - 1039 (git 3ae41d59) ### 🧭 Other plugins _No response_
1.0
Could not pass event FurnaceSmeltEvent - ### ❗ Checklist - [X] I am using the official english version of Slimefun and did not modify the jar. - [X] I am using an up to date "DEV" (not "RC") version of Slimefun. - [X] I am aware that issues related to Slimefun addons need to be reported on their bug trackers and not here. - [X] I searched for similar open issues and could not find an existing bug report on this. ### 📍 Description N/A ### 📑 Reproduction Steps Unknown ### 💡 Expected Behavior N/A ### 📷 Screenshots / Videos _No response_ ### 📜 Server Log https://paste.gg/p/ConaII_/ce0b4a7588a34a96a7b233f099e54ec7 ### 📂 `/error-reports/` folder _No response_ ### 💻 Server Software Purpur ### 🎮 Minecraft Version 1.19.x ### ⭐ Slimefun version DEV - 1039 (git 3ae41d59) ### 🧭 Other plugins _No response_
test
could not pass event furnacesmeltevent ❗ checklist i am using the official english version of slimefun and did not modify the jar i am using an up to date dev not rc version of slimefun i am aware that issues related to slimefun addons need to be reported on their bug trackers and not here i searched for similar open issues and could not find an existing bug report on this 📍 description n a 📑 reproduction steps unknown 💡 expected behavior n a 📷 screenshots videos no response 📜 server log 📂 error reports folder no response 💻 server software purpur 🎮 minecraft version x ⭐ slimefun version dev git 🧭 other plugins no response
1
138,359
11,199,506,027
IssuesEvent
2020-01-03 18:56:10
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
opened
DistributedDataParallelTest.test_accumulate_gradients_no_sync is flaky
module: distributed topic: flaky-tests triaged
https://app.circleci.com/jobs/github/pytorch/pytorch/4116989 ``` 16:36:00 ====================================================================== 16:36:00 ERROR: test_accumulate_gradients_no_sync (__main__.DistributedDataParallelTest) 16:36:00 ---------------------------------------------------------------------- 16:36:00 Traceback (most recent call last): 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 130, in wrapper 16:36:00 self._join_processes(fn) 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 211, in _join_processes 16:36:00 self._check_return_codes(elapsed_time) 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 230, in _check_return_codes 16:36:00 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time)) 16:36:00 RuntimeError: Process 0 terminated or timed out after 100.10047364234924 seconds ```
1.0
DistributedDataParallelTest.test_accumulate_gradients_no_sync is flaky - https://app.circleci.com/jobs/github/pytorch/pytorch/4116989 ``` 16:36:00 ====================================================================== 16:36:00 ERROR: test_accumulate_gradients_no_sync (__main__.DistributedDataParallelTest) 16:36:00 ---------------------------------------------------------------------- 16:36:00 Traceback (most recent call last): 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 130, in wrapper 16:36:00 self._join_processes(fn) 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 211, in _join_processes 16:36:00 self._check_return_codes(elapsed_time) 16:36:00 File "/var/lib/jenkins/workspace/test/common_distributed.py", line 230, in _check_return_codes 16:36:00 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time)) 16:36:00 RuntimeError: Process 0 terminated or timed out after 100.10047364234924 seconds ```
test
distributeddataparalleltest test accumulate gradients no sync is flaky error test accumulate gradients no sync main distributeddataparalleltest traceback most recent call last file var lib jenkins workspace test common distributed py line in wrapper self join processes fn file var lib jenkins workspace test common distributed py line in join processes self check return codes elapsed time file var lib jenkins workspace test common distributed py line in check return codes raise runtimeerror process terminated or timed out after seconds format i elapsed time runtimeerror process terminated or timed out after seconds
1
32,800
7,603,497,894
IssuesEvent
2018-04-29 15:10:04
zeebe-io/zeebe
https://api.github.com/repos/zeebe-io/zeebe
closed
Client can define replication factor for new topic
broker client code feature raft topic partitions
* System topic interprets the creation request's replication factor and sizes the raft groups for the new partitions accordingly
1.0
Client can define replication factor for new topic - * System topic interprets the creation request's replication factor and sizes the raft groups for the new partitions accordingly
non_test
client can define replication factor for new topic system topic interprets the creation request s replication factor and sizes the raft groups for the new partitions accordingly
0
160,578
25,193,625,640
IssuesEvent
2022-11-12 08:01:50
nextcloud/text
https://api.github.com/repos/nextcloud/text
closed
Readme.md file editing in the header error - layout problems
enhancement design high needs triage: julien
**Describe the bug** Not sure if this is a Text problem or a Nextcloud problem. A Readme.md file contents in a folder gets displayed in the header area above the files list and below the breadcrumbs. Several issues: 1. There is a lot of whitespace between the breadcrumbs and the displayed text from the file. When you click on the file the Text toolbar appears here for editing. It is set to autohide when the text edit area is not focussed but it is just making it invisible rather than properly hiding it. (display:none;) 2. There seems to be a lot of confusion between the nest of divs about what heights are set and whether overflow scrollbars appear. Heights of 30vh and 50vh are both set on different divs. When you click on the text the height of the textarea changes causing the display below to jump around. Sometimes this means that if the user then clicks away to a file below the wrong file seems to get selected. It would be better by far if the whole area was a fixed max-height so that the display below didn't jump around 3. The vertical scrollbar if the text overflows only appears when editing. This should also be present if the text is not being edited and the text is longer than the available space (max-height) so that the user can scroll to see more (eg mouse wheel) without actually going into edit mode. 4. Whilst the ability to edit the readme text in the place where it is displayed might seem nice it causes problems, particularly in shared folders and group folders, where naive users can inadvertently click on the text and edit it by mistake. A good feature would be the ability to disable this in-place editing option - either globally, or per group (eg Admins only). If it were disabled you can still edit the text by clicking on the file anyway (which seems more logical - there is no other part of the interface where clicking on text allows you to edit it 5. 
To ameliorate issue 4 perhaps putting a colour wash background on the display area would help to make it clear that it was something different. **To Reproduce** Steps to reproduce the behavior: Create a Readme.md in folder and click on the text in the header area. **Expected behavior** A clear cue that the text is editable if clicked on The ability to have a setting to disable in-place editing No wasted white space above or below the display area Consistent height to the text area whatever the mode (without overwriting the recommendations area on the home AllFiles page)(whether absolute or viewport percentages) Scroll bars appearing if text from file is overflowing the available space whatever the mode. **Client details:** - OS: [e.g. iOS] OSx - Browser: [e.g. chrome, safari] chrome - Version: [e.g. 22] latest stable - Device: [e.g. iPhone6, desktop] macbook <details> <summary>Server details</summary> <!-- You can use the Issue Template application to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate --> **Text app version:** (see Nextcloud apps page) **Operating system:** Linux Debian **Web server:** Apache **Database:** MySQL **PHP version:** 7.3 **Nextcloud version:** (see Nextcloud admin page) 18.0.1 </details> <details> <summary>Logs</summary> #### Nextcloud log (data/nextcloud.log) ``` Insert your Nextcloud log here ``` #### Browser log ``` Insert your browser log here, this could for example include: a) The javascript console log b) The network log c) ... ``` </details>
1.0
Readme.md file editing in the header error - layout problems - **Describe the bug** Not sure if this is a Text problem or a Nextcloud problem. A Readme.md file contents in a folder gets displayed in the header area above the files list and below the breadcrumbs. Several issues: 1. There is a lot of whitespace between the breadcrumbs and the displayed text from the file. When you click on the file the Text toolbar appears here for editing. It is set to autohide when the text edit area is not focussed but it is just making it invisible rather than properly hiding it. (display:none;) 2. There seems to be a lot of confusion between the nest of divs about what heights are set and whether overflow scrollbars appear. Heights of 30vh and 50vh are both set on different divs. When you click on the text the height of the textarea changes causing the display below to jump around. Sometimes this means that if the user then clicks away to a file below the wrong file seems to get selected. It would be better by far if the whole area was a fixed max-height so that the display below didn't jump around 3. The vertical scrollbar if the text overflows only appears when editing. This should also be present if the text is not being edited and the text is longer than the available space (max-height) so that the user can scroll to see more (eg mouse wheel) without actually going into edit mode. 4. Whilst the ability to edit the readme text in the place where it is displayed might seem nice it causes problems, particularly in shared folders and group folders, where naive users can inadvertently click on the text and edit it by mistake. A good feature would be the ability to disable this in-place editing option - either globally, or per group (eg Admins only). If it were disabled you can still edit the text by clicking on the file anyway (which seems more logical - there is no other part of the interface where clicking on text allows you to edit it 5. 
To ameliorate issue 4 perhaps putting a colour wash background on the display area would help to make it clear that it was something different. **To Reproduce** Steps to reproduce the behavior: Create a Readme.md in folder and click on the text in the header area. **Expected behavior** A clear cue that the text is editable if clicked on The ability to have a setting to disable in-place editing No wasted white space above or below the display area Consistent height to the text area whatever the mode (without overwriting the recommendations area on the home AllFiles page)(whether absolute or viewport percentages) Scroll bars appearing if text from file is overflowing the available space whatever the mode. **Client details:** - OS: [e.g. iOS] OSx - Browser: [e.g. chrome, safari] chrome - Version: [e.g. 22] latest stable - Device: [e.g. iPhone6, desktop] macbook <details> <summary>Server details</summary> <!-- You can use the Issue Template application to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate --> **Text app version:** (see Nextcloud apps page) **Operating system:** Linux Debian **Web server:** Apache **Database:** MySQL **PHP version:** 7.3 **Nextcloud version:** (see Nextcloud admin page) 18.0.1 </details> <details> <summary>Logs</summary> #### Nextcloud log (data/nextcloud.log) ``` Insert your Nextcloud log here ``` #### Browser log ``` Insert your browser log here, this could for example include: a) The javascript console log b) The network log c) ... ``` </details>
non_test
readme md file editing in the header error layout problems describe the bug not sure if this is a text problem or a nextcloud problem a readme md file contents in a folder gets displayed in the header area above the files list and below the breadcrumbs several issues there is a lot of whitespace between the breadcrumbs and the displayed text from the file when you click on the file the text toolbar appears here for editing it is set to autohide when the text edit area is not focussed but it is just making it invisible rather than properly hiding it display none there seems to be a lot of confusion between the nest of divs about what heights are set and whether overflow scrollbars appear heights of and are both set on different divs when you click on the text the height of the textarea changes causing the display below to jump around sometimes this means that if the user then clicks away to a file below the wrong file seems to get selected it would be better by far if the whole area was a fixed max height so that the display below didn t jump around the vertical scrollbar if the text overflows only appears when editing this should also be present if the text is not being edited and the text is longer than the available space max height so that the user can scroll to see more eg mouse wheel without actually going into edit mode whilst the ability to edit the readme text in the place where it is displayed might seem nice it causes problems particularly in shared folders and group folders where naive users can inadvertently click on the text and edit it by mistake a good feature would be the ability to disable this in place editing option either globally or per group eg admins only if it were disabled you can still edit the text by clicking on the file anyway which seems more logical there is no other part of the interface where clicking on text allows you to edit it to ameliorate issue perhaps putting a colour wash background on the display area would help to make it 
clear that it was something different to reproduce steps to reproduce the behavior create a readme md in folder and click on the text in the header area expected behavior a clear cue that the text is editable if clicked on the ability to have a setting to disable in place editing no wasted white space above or below the display area consistent height to the text area whatever the mode without overwriting the recommendations area on the home allfiles page whether absolute or viewport percentages scroll bars appearing if text from file is overflowing the available space whatever the mode client details os osx browser chrome version latest stable device macbook server details you can use the issue template application to prefill most of the required information text app version see nextcloud apps page operating system linux debian web server apache database mysql php version nextcloud version see nextcloud admin page logs nextcloud log data nextcloud log insert your nextcloud log here browser log insert your browser log here this could for example include a the javascript console log b the network log c
0
112,524
14,263,474,392
IssuesEvent
2020-11-20 14:29:25
blockframes/blockframes
https://api.github.com/repos/blockframes/blockframes
closed
Go back to landing page on SIGN UP / LOG IN PAGE
Design Pending - UX decision
- [x] I realize there should be a way for users to go back to landing page if they want to when they're logging in / signing up
1.0
Go back to landing page on SIGN UP / LOG IN PAGE - - [x] I realize there should be a way for users to go back to landing page if they want to when they're logging in / signing up
non_test
go back to landing page on sign up log in page i realize there should be a way for users to go back to landing page if they want to when they re logging in signing up
0
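The derived columns in the rows above follow a visible pattern: `text_combine` is `"<title> - <body>"`, `text` is that string lowercased with digits removed and ASCII punctuation blanked out (non-ASCII such as emojis, curly quotes, and em-dashes survives — compare the Slimefun and Gardener rows), and `binary_label` is 1 for `test` and 0 for `non_test`. The dataset's actual preprocessing code is not included here; the sketch below is only a reconstruction of that pattern, and the helper names are hypothetical.

```python
import re

# Hypothetical reconstruction of the derived columns seen in the rows above.
# These helpers reproduce the observable pattern; they are not the dataset's
# real preprocessing pipeline.

def combine(title: str, body: str) -> str:
    # `text_combine` appears to be "<title> - <body>".
    return f"{title} - {body}"

def clean(text_combine: str) -> str:
    # `text` appears to be `text_combine` lowercased, with digits dropped
    # ("priority 3" -> "priority", "1.19.x" -> "x") and ASCII punctuation
    # replaced by spaces ("step-by-step" -> "step by step",
    # "didn't" -> "didn t"); non-ASCII characters are kept, which explains
    # the surviving emojis and curly quotes. Whitespace is then collapsed.
    t = text_combine.lower()
    t = re.sub(r"[0-9]", "", t)              # digits vanish entirely
    t = re.sub(r"[!-/:-@\[-`{-~]", " ", t)   # all ASCII punctuation -> space
    return re.sub(r"\s+", " ", t).strip()

# `binary_label` appears to map the string label column directly.
LABEL_TO_BINARY = {"test": 1, "non_test": 0}
```

As a spot check, applying `clean` to the `text_combine` value of the Flax row ("Add a tutorial for BNNs with Flax - We already have examples...") reproduces that row's `text` value exactly, including "step by step" from "*step-by-step*".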