Unnamed: 0 int64 3 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 742 | labels stringlengths 4 431 | body stringlengths 5 239k | index stringclasses 10 values | text_combine stringlengths 96 240k | label stringclasses 2 values | text stringlengths 96 200k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
69,874 | 8,468,592,480 | IssuesEvent | 2018-10-23 20:12:01 | JosefPihrt/Roslynator | https://api.github.com/repos/JosefPihrt/Roslynator | closed | RCS1179 False Negative | Area-Analyzers Resolution-By Design | Should RCS1179 offer to change this:
```csharp
namespace RCS1179_False_Negative
{
static class Program
{
public static string Method()
{
string value = "1";
if (System.Environment.Is64BitProcess)
{
value = "2";
}
return value;
}
}
}
```
To this?
```csharp
namespace RCS1179_False_Negative
{
static class Program
{
public static string Method()
{
if (System.Environment.Is64BitProcess)
{
return "2";
}
return "1";
}
}
}
``` | 1.0 | RCS1179 False Negative - Should RCS1179 offer to change this:
```csharp
namespace RCS1179_False_Negative
{
static class Program
{
public static string Method()
{
string value = "1";
if (System.Environment.Is64BitProcess)
{
value = "2";
}
return value;
}
}
}
```
To this?
```csharp
namespace RCS1179_False_Negative
{
static class Program
{
public static string Method()
{
if (System.Environment.Is64BitProcess)
{
return "2";
}
return "1";
}
}
}
``` | non_usab | false negative should offer to change this csharp namespace false negative static class program public static string method string value if system environment value return value to this csharp namespace false negative static class program public static string method if system environment return return | 0 |
21,999 | 18,263,089,114 | IssuesEvent | 2021-10-04 03:36:05 | SanderMertens/flecs | https://api.github.com/repos/SanderMertens/flecs | closed | Add meta components to core | enhancement usability | **Describe the problem you are trying to solve.**
Currently the flecs-meta module has components for describing types, and a set of macro's to populate those types. Applications may want to use different reflection systems though, and in this case flecs cannot be made aware of the layout of types. This awareness can be useful in several scenarios, such as querying.
**Describe the solution you'd like**
Move components from flecs-meta to a core addon. Keep the code to populate the components in flecs-meta, so that different reflection front ends can populate the components.
| True | Add meta components to core - **Describe the problem you are trying to solve.**
Currently the flecs-meta module has components for describing types, and a set of macro's to populate those types. Applications may want to use different reflection systems though, and in this case flecs cannot be made aware of the layout of types. This awareness can be useful in several scenarios, such as querying.
**Describe the solution you'd like**
Move components from flecs-meta to a core addon. Keep the code to populate the components in flecs-meta, so that different reflection front ends can populate the components.
| usab | add meta components to core describe the problem you are trying to solve currently the flecs meta module has components for describing types and a set of macro s to populate those types applications may want to use different reflection systems though and in this case flecs cannot be made aware of the layout of types this awareness can be useful in several scenarios such as querying describe the solution you d like move components from flecs meta to a core addon keep the code to populate the components in flecs meta so that different reflection front ends can populate the components | 1 |
27,659 | 30,037,393,839 | IssuesEvent | 2023-06-27 13:34:34 | haraldng/omnipaxos | https://api.github.com/repos/haraldng/omnipaxos | closed | Complete reconnection automation | enhancement usability | Together issue #61 and issue #67 enable us to discover and react to dropped `Prepare`, `AcceptSync`, `AcceptDecide`, and `Decide` messages rather than relying on the user to call reconnected().
They don't cover cases of dropped `AcceptStopSign`, `DecideStopSign`, `PrepareReq` messages. The absence of these messages can't be easily detected with sequence numbers. Instead they can be resent after some timeout (can for example use the same timeout as BLE). | True | Complete reconnection automation - Together issue #61 and issue #67 enable us to discover and react to dropped `Prepare`, `AcceptSync`, `AcceptDecide`, and `Decide` messages rather than relying on the user to call reconnected().
They don't cover cases of dropped `AcceptStopSign`, `DecideStopSign`, `PrepareReq` messages. The absence of these messages can't be easily detected with sequence numbers. Instead they can be resent after some timeout (can for example use the same timeout as BLE). | usab | complete reconnection automation together issue and issue enable us to discover and react to dropped prepare acceptsync acceptdecide and decide messages rather than relying on the user to call reconnected they don t cover cases of dropped acceptstopsign decidestopsign preparereq messages the absence of these messages can t be easily detected with sequence numbers instead they can be resent after some timeout can for example use the same timeout as ble | 1 |
115,362 | 4,663,779,773 | IssuesEvent | 2016-10-05 10:28:11 | Lakshman-LD/LetsMeetUp | https://api.github.com/repos/Lakshman-LD/LetsMeetUp | closed | render table in a different way for phones | enhancement High Priority | table in event page has to be rendered in an horizontal way | 1.0 | render table in a different way for phones - table in event page has to be rendered in an horizontal way | non_usab | render table in a different way for phones table in event page has to be rendered in an horizontal way | 0 |
679,666 | 23,241,243,431 | IssuesEvent | 2022-08-03 15:45:39 | fpdcc/ccfp-asset-dashboard | https://api.github.com/repos/fpdcc/ccfp-asset-dashboard | closed | production - Phase funding can not be deleted | bug high priority production | After adding a phase funding item it can not be deleted. Takes you to "page not found".

| 1.0 | production - Phase funding can not be deleted - After adding a phase funding item it can not be deleted. Takes you to "page not found".

| non_usab | production phase funding can not be deleted after adding a phase funding item it can not be deleted takes you to page not found | 0 |
50,191 | 10,467,394,503 | IssuesEvent | 2019-09-22 04:44:40 | evanplaice/evanplaice | https://api.github.com/repos/evanplaice/evanplaice | closed | Maintenance release for 'absurdum' | Code | Clean up everything, improve/add missing documentation, streamline CI/CD. | 1.0 | Maintenance release for 'absurdum' - Clean up everything, improve/add missing documentation, streamline CI/CD. | non_usab | maintenance release for absurdum clean up everything improve add missing documentation streamline ci cd | 0 |
176,506 | 28,104,445,354 | IssuesEvent | 2023-03-30 22:35:41 | telosnetwork/telos-wallet | https://api.github.com/repos/telosnetwork/telos-wallet | closed | Design EVM Wallet 0.1 for mobile | 🎨 Needs Design | ## Overview
After getting confirmation from the current design, we want to provide the mobile version to the developers.
## Acceptance criteria
- Design the landing page
- Design the wallet balance
- Design the wallet transactions
- Design the wallet send
- Design the wallet receive
- Design the wallet stake | 1.0 | Design EVM Wallet 0.1 for mobile - ## Overview
After getting confirmation from the current design, we want to provide the mobile version to the developers.
## Acceptance criteria
- Design the landing page
- Design the wallet balance
- Design the wallet transactions
- Design the wallet send
- Design the wallet receive
- Design the wallet stake | non_usab | design evm wallet for mobile overview after getting confirmation from the current design we want to provide the mobile version to the developers acceptance criteria design the landing page design the wallet balance design the wallet transactions design the wallet send design the wallet receive design the wallet stake | 0 |
28,265 | 4,086,920,956 | IssuesEvent | 2016-06-01 08:04:36 | RestComm/Restcomm-Connect | https://api.github.com/repos/RestComm/Restcomm-Connect | opened | RVD application log UI does not work | 1. Bug Visual App Designer | RVD application log UI does not display messages and the following exception is shown in server.log:
```
10:30:10,639 SEVERE [com.sun.jersey.spi.container.ContainerResponse] (http-/10.42.0.1:8080-20) The RuntimeException could not be mapped to a response, re-throwing to the HTTP container: java.lang.NullPointerException
at org.mobicents.servlet.restcomm.rvd.http.resources.SecuredRestService.secure(SecuredRestService.java:36) [classes:]
at org.mobicents.servlet.restcomm.rvd.http.resources.RvdController.appLog(RvdController.java:421) [classes:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_95]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_95]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_95]
``` | 1.0 | RVD application log UI does not work - RVD application log UI does not display messages and the following exception is shown in server.log:
```
10:30:10,639 SEVERE [com.sun.jersey.spi.container.ContainerResponse] (http-/10.42.0.1:8080-20) The RuntimeException could not be mapped to a response, re-throwing to the HTTP container: java.lang.NullPointerException
at org.mobicents.servlet.restcomm.rvd.http.resources.SecuredRestService.secure(SecuredRestService.java:36) [classes:]
at org.mobicents.servlet.restcomm.rvd.http.resources.RvdController.appLog(RvdController.java:421) [classes:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_95]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_95]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_95]
``` | non_usab | rvd application log ui does not work rvd application log ui does not display messages and the following exception is shown in server log severe http the runtimeexception could not be mapped to a response re throwing to the http container java lang nullpointerexception at org mobicents servlet restcomm rvd http resources securedrestservice secure securedrestservice java at org mobicents servlet restcomm rvd http resources rvdcontroller applog rvdcontroller java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java | 0 |
264,708 | 23,134,702,571 | IssuesEvent | 2022-07-28 13:28:14 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing ES Promotion: Chrome X-Pack UI Functional Tests - ML anomaly_detection.x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer·ts - machine learning - anomaly detection anomaly explorer with farequote based multi metric job renders View By swim lane | blocker :ml skipped-test failed-es-promotion Team:ML v8.4.0 | **Chrome X-Pack UI Functional Tests - ML anomaly_detection**
**x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer.ts**
**machine learning - anomaly detection anomaly explorer with farequote based multi metric job renders View By swim lane**
This failure is preventing the promotion of the current Elasticsearch nightly snapshot.
For more information on the Elasticsearch snapshot promotion process including how to reproduce using the unverified nightly ES build: https://www.elastic.co/guide/en/kibana/master/development-es-snapshots.html
* [Failed promotion job](https://buildkite.com/elastic/kibana-elasticsearch-snapshot-verify/builds/1486#01824005-72d9-4b2a-9d76-45c3be5e2b98)
* [Test Failure](https://buildkite.com/organizations/elastic/pipelines/kibana-elasticsearch-snapshot-verify/builds/1486/jobs/01824005-72d9-4b2a-9d76-45c3be5e2b98/artifacts/0182402d-8731-4fba-9424-26caabd1e1b5)
```
Error: Expected swim lane y labels to be AAL,VRD,EGF,SWR,AMX,JZA,TRS,ACA,BAW,ASA, got AAL,EGF,VRD,SWR,JZA,AMX,TRS,ACA,BAW,ASA
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Object.assertAxisLabels (x-pack/test/functional/services/ml/swim_lane.ts:88:31)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Context.<anonymous> (x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer.ts:167:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n' +
' "AAL"\n' +
' "EGF"\n' +
' "VRD"\n' +
' "SWR"\n' +
' "JZA"\n' +
' "AMX"\n' +
' "TRS"\n' +
' "ACA"\n' +
' "BAW"\n' +
' "ASA"\n' +
']',
expected: '[\n' +
' "AAL"\n' +
' "VRD"\n' +
' "EGF"\n' +
' "SWR"\n' +
' "AMX"\n' +
' "JZA"\n' +
' "TRS"\n' +
' "ACA"\n' +
' "BAW"\n' +
' "ASA"\n' +
']',
showDiff: true
}
``` | 1.0 | Failing ES Promotion: Chrome X-Pack UI Functional Tests - ML anomaly_detection.x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer·ts - machine learning - anomaly detection anomaly explorer with farequote based multi metric job renders View By swim lane - **Chrome X-Pack UI Functional Tests - ML anomaly_detection**
**x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer.ts**
**machine learning - anomaly detection anomaly explorer with farequote based multi metric job renders View By swim lane**
This failure is preventing the promotion of the current Elasticsearch nightly snapshot.
For more information on the Elasticsearch snapshot promotion process including how to reproduce using the unverified nightly ES build: https://www.elastic.co/guide/en/kibana/master/development-es-snapshots.html
* [Failed promotion job](https://buildkite.com/elastic/kibana-elasticsearch-snapshot-verify/builds/1486#01824005-72d9-4b2a-9d76-45c3be5e2b98)
* [Test Failure](https://buildkite.com/organizations/elastic/pipelines/kibana-elasticsearch-snapshot-verify/builds/1486/jobs/01824005-72d9-4b2a-9d76-45c3be5e2b98/artifacts/0182402d-8731-4fba-9424-26caabd1e1b5)
```
Error: Expected swim lane y labels to be AAL,VRD,EGF,SWR,AMX,JZA,TRS,ACA,BAW,ASA, got AAL,EGF,VRD,SWR,JZA,AMX,TRS,ACA,BAW,ASA
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Object.assertAxisLabels (x-pack/test/functional/services/ml/swim_lane.ts:88:31)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Context.<anonymous> (x-pack/test/functional/apps/ml/anomaly_detection/anomaly_explorer.ts:167:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n' +
' "AAL"\n' +
' "EGF"\n' +
' "VRD"\n' +
' "SWR"\n' +
' "JZA"\n' +
' "AMX"\n' +
' "TRS"\n' +
' "ACA"\n' +
' "BAW"\n' +
' "ASA"\n' +
']',
expected: '[\n' +
' "AAL"\n' +
' "VRD"\n' +
' "EGF"\n' +
' "SWR"\n' +
' "AMX"\n' +
' "JZA"\n' +
' "TRS"\n' +
' "ACA"\n' +
' "BAW"\n' +
' "ASA"\n' +
']',
showDiff: true
}
``` | non_usab | failing es promotion chrome x pack ui functional tests ml anomaly detection x pack test functional apps ml anomaly detection anomaly explorer·ts machine learning anomaly detection anomaly explorer with farequote based multi metric job renders view by swim lane chrome x pack ui functional tests ml anomaly detection x pack test functional apps ml anomaly detection anomaly explorer ts machine learning anomaly detection anomaly explorer with farequote based multi metric job renders view by swim lane this failure is preventing the promotion of the current elasticsearch nightly snapshot for more information on the elasticsearch snapshot promotion process including how to reproduce using the unverified nightly es build error expected swim lane y labels to be aal vrd egf swr amx jza trs aca baw asa got aal egf vrd swr jza amx trs aca baw asa at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at object assertaxislabels x pack test functional services ml swim lane ts at runmicrotasks at processticksandrejections node internal process task queues at context x pack test functional apps ml anomaly detection anomaly explorer ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js actual n aal n egf n vrd n swr n jza n amx n trs n aca n baw n asa n expected n aal n vrd n egf n swr n amx n jza n trs n aca n baw n asa n showdiff true | 0 |
22,183 | 18,817,423,070 | IssuesEvent | 2021-11-10 01:58:40 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | closed | Auth Keys and non-admin accounts access to admin console. | admin UI L3 Some users P2 Aggravating T5 Usability | The customer is working on a scenario where they want to provision developer infrastructure with Terraform and automatically log in and register machine to Tailscale under the developer's account.
While this is doable with auth keys, non-admin accounts cannot access this interface right now unless they temporarily give admin access to the user.
Workaround:
Share the service account or support account access with developers.
<img src="https://frontapp.com/assets/img/favicons/favicon-32x32.png" height="16" width="16" alt="Front logo" /> [Front conversations](https://app.frontapp.com/open/top_3gglt) | True | Auth Keys and non-admin accounts access to admin console. - The customer is working on a scenario where they want to provision developer infrastructure with Terraform and automatically log in and register machine to Tailscale under the developer's account.
While this is doable with auth keys, non-admin accounts cannot access this interface right now unless they temporarily give admin access to the user.
Workaround:
Share the service account or support account access with developers.
<img src="https://frontapp.com/assets/img/favicons/favicon-32x32.png" height="16" width="16" alt="Front logo" /> [Front conversations](https://app.frontapp.com/open/top_3gglt) | usab | auth keys and non admin accounts access to admin console the customer is working on a scenario where they want to provision developer infrastructure with terraform and automatically log in and register machine to tailscale under the developer s account while this is doable with auth keys non admin accounts cannot access this interface right now unless they temporarily give admin access to the user workaround share the service account or support account access with developers | 1 |
99,542 | 30,488,401,239 | IssuesEvent | 2023-07-18 05:29:32 | microsoft/onnxruntime | https://api.github.com/repos/microsoft/onnxruntime | closed | [Build] How to use cmake to compile and generate 'onnxruntime. dll' from source code in Windows | build platform:windows | ### Describe the issue
Compile from source code using cmake on Windows using default options to generate the following lib, but without 'onnxruntime. dll'“

### Urgency
_No response_
### Target platform
Windows
### Build script
Cmake 3.26.4+VS2019
### Error / output
no 'onnxruntime.dll' in the generated file

### Visual Studio Version
_No response_
### GCC / Compiler Version
_No response_ | 1.0 | [Build] How to use cmake to compile and generate 'onnxruntime. dll' from source code in Windows - ### Describe the issue
Compile from source code using cmake on Windows using default options to generate the following lib, but without 'onnxruntime. dll'“

### Urgency
_No response_
### Target platform
Windows
### Build script
Cmake 3.26.4+VS2019
### Error / output
no 'onnxruntime.dll' in the generated file

### Visual Studio Version
_No response_
### GCC / Compiler Version
_No response_ | non_usab | how to use cmake to compile and generate onnxruntime dll from source code in windows describe the issue compile from source code using cmake on windows using default options to generate the following lib but without onnxruntime dll “ urgency no response target platform windows build script cmake error output no onnxruntime dll in the generated file visual studio version no response gcc compiler version no response | 0 |
685,303 | 23,452,003,316 | IssuesEvent | 2022-08-16 04:29:47 | dnd-side-project/dnd-7th-7-backend | https://api.github.com/repos/dnd-side-project/dnd-7th-7-backend | opened | feat(route): Modify the return value of getByID API | Priority: High Type: Feature Status: In Progress | ## 🤷 이슈 내용
경로가 메인 경로일 경우 return 값에 리뷰 목록을 추가합니다.
## 📸 스크린샷
<img width="187" alt="image" src="https://user-images.githubusercontent.com/89819254/184797917-348a241e-772e-492f-9ab1-c77298fb883e.png">
| 1.0 | feat(route): Modify the return value of getByID API - ## 🤷 이슈 내용
경로가 메인 경로일 경우 return 값에 리뷰 목록을 추가합니다.
## 📸 스크린샷
<img width="187" alt="image" src="https://user-images.githubusercontent.com/89819254/184797917-348a241e-772e-492f-9ab1-c77298fb883e.png">
| non_usab | feat route modify the return value of getbyid api 🤷 이슈 내용 경로가 메인 경로일 경우 return 값에 리뷰 목록을 추가합니다 📸 스크린샷 img width alt image src | 0 |
152,501 | 5,848,202,621 | IssuesEvent | 2017-05-10 20:20:29 | samsung-cnct/k2 | https://api.github.com/repos/samsung-cnct/k2 | closed | Replace cloud-init coreos.units with write-file+runcmd | feature request K2 priority-p1 | coreos.units is specific to the modified cloud-init that coreos ships and will not work with other distros. Replace it.
We can use `write-files` to create the systemd.service files.
We can use `runcmd` to start and enable the service files once they're written. | 1.0 | Replace cloud-init coreos.units with write-file+runcmd - coreos.units is specific to the modified cloud-init that coreos ships and will not work with other distros. Replace it.
We can use `write-files` to create the systemd.service files.
We can use `runcmd` to start and enable the service files once they're written. | non_usab | replace cloud init coreos units with write file runcmd coreos units is specific to the modified cloud init that coreos ships and will not work with other distros replace it we can use write files to create the systemd service files we can use runcmd to start and enable the service files once they re written | 0 |
3,637 | 3,510,075,094 | IssuesEvent | 2016-01-09 06:02:27 | stamp-web/stamp-web-aurelia | https://api.github.com/repos/stamp-web/stamp-web-aurelia | closed | Setting catalogue number as active does not update live instance in page | bug usability | If you have a stamp, add a new active catalogue number, set this to active and then edit the stamp, the old non-active number will be set now as the active, however since it was not a re-activatve the old active catalogue number is not de-activated resulting in two active numbers. This causes a data-corruption in the database (perhaps a constraint is needed there?) which has to be repaired.
Refreshing the page and then editing is a workaround, but due to data corruption it can not be repaired in the UI if the above steps are followed | True | Setting catalogue number as active does not update live instance in page - If you have a stamp, add a new active catalogue number, set this to active and then edit the stamp, the old non-active number will be set now as the active, however since it was not a re-activatve the old active catalogue number is not de-activated resulting in two active numbers. This causes a data-corruption in the database (perhaps a constraint is needed there?) which has to be repaired.
Refreshing the page and then editing is a workaround, but due to data corruption it can not be repaired in the UI if the above steps are followed | usab | setting catalogue number as active does not update live instance in page if you have a stamp add a new active catalogue number set this to active and then edit the stamp the old non active number will be set now as the active however since it was not a re activatve the old active catalogue number is not de activated resulting in two active numbers this causes a data corruption in the database perhaps a constraint is needed there which has to be repaired refreshing the page and then editing is a workaround but due to data corruption it can not be repaired in the ui if the above steps are followed | 1 |
22,847 | 20,360,495,494 | IssuesEvent | 2022-02-20 16:08:55 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | It's inconvenient that we require to specify `<port>` in `<remote_servers>` cluster configuration. | usability | **Describe the issue**
```
remote_servers.play.shard.replica.port
```
Should use server's TCP port by default. | True | It's inconvenient that we require to specify `<port>` in `<remote_servers>` cluster configuration. - **Describe the issue**
```
remote_servers.play.shard.replica.port
```
Should use server's TCP port by default. | usab | it s inconvenient that we require to specify in cluster configuration describe the issue remote servers play shard replica port should use server s tcp port by default | 1 |
61,336 | 8,514,502,268 | IssuesEvent | 2018-10-31 18:44:53 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [docs] Document new data sources: S3, WebDAV | documentation priority: medium | Much like the docs for Box and CMIS we need to document WebDAV and S3.
S3 is still minimal, so won't be as elaborate as WebDAV and Box. It will fully fleshed out once this ticket is completed: https://github.com/craftercms/craftercms/issues/2508 | 1.0 | [docs] Document new data sources: S3, WebDAV - Much like the docs for Box and CMIS we need to document WebDAV and S3.
S3 is still minimal, so won't be as elaborate as WebDAV and Box. It will fully fleshed out once this ticket is completed: https://github.com/craftercms/craftercms/issues/2508 | non_usab | document new data sources webdav much like the docs for box and cmis we need to document webdav and is still minimal so won t be as elaborate as webdav and box it will fully fleshed out once this ticket is completed | 0 |
497,870 | 14,395,787,959 | IssuesEvent | 2020-12-03 04:41:22 | hyphacoop/organizing | https://api.github.com/repos/hyphacoop/organizing | opened | Close Hypha office for holidays | [priority-★★☆] wg:operations | <sup>_This initial comment is collaborative and open to modification by all._</sup>
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** Dec 18, 2020
🎯 **Success criteria:** ...
Make sure we're ready to close office for 2 weeks
## To Do
- [ ] Set up responder to hello@
- [ ] Record temporary VM greeting
- [ ] encourage members to add email responder? (provide draft text)
| 1.0 | Close Hypha office for holidays - <sup>_This initial comment is collaborative and open to modification by all._</sup>
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** Dec 18, 2020
🎯 **Success criteria:** ...
Make sure we're ready to close office for 2 weeks
## To Do
- [ ] Set up responder to hello@
- [ ] Record temporary VM greeting
- [ ] encourage members to add email responder? (provide draft text)
| non_usab | close hypha office for holidays this initial comment is collaborative and open to modification by all task summary 🎟️ re ticketed from 🗣 loomio n a 📅 due date dec 🎯 success criteria make sure we re ready to close office for weeks to do set up responder to hello record temporary vm greeting encourage members to add email responder provide draft text | 0 |
111,862 | 9,544,664,071 | IssuesEvent | 2019-05-01 14:51:47 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | teamcity: failed test: TestLint | C-test-failure O-robot | The following tests appear to have failed on master (lint): TestLint, TestLint/TestVet: TestLint/TestVet/shadow, TestLint/TestVet
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestLint).
[#1262013](https://teamcity.cockroachdb.com/viewLog.html?buildId=1262013):
```
TestLint/TestVet
--- FAIL: lint/TestLint: TestLint/TestVet (842.530s)
------- Stdout: -------
=== PAUSE TestLint/TestVet
TestLint/TestVet: TestLint/TestVet/shadow
...
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:2218 +0x146 fp=0xc0034bbe28 sp=0xc0034bbdb8 pc=0xba8596
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc0013d1800, 0xc000c33e10, 0xc003ff9b00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:1268 +0x8e98 fp=0xc0034bc4e0 sp=0xc0034bbe28 pc=0xba32a8
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc0013d1a00, 0xc000c33e10, 0xc0013d1a00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:724 +0x42f9 fp=0xc0034bcb98 sp=0xc0034bc4e0 pc=0xb9e709
lint_test.go:1352:
cmd/compile/internal/gc.slicelit(0x0, 0xc000c32e00, 0xc003ff9950, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/sinit.go:866 +0x1db fp=0xc0034bcd80 sp=0xc0034bcb98 pc=0xb2efab
lint_test.go:1352:
cmd/compile/internal/gc.anylit(0xc000c32e00, 0xc003ff9950, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/sinit.go:1130 +0xf2c fp=0xc0034bce58 sp=0xc0034bcd80 pc=0xb335ac
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc000c32e00, 0xc000c33e10, 0xc000d62e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:1730 +0x2347 fp=0xc0034bd510 sp=0xc0034bce58 pc=0xb9c757
lint_test.go:1352:
cmd/compile/internal/gc.walkexprlist(0xc0005d7410, 0x2, 0x2, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:379 +0x50 fp=0xc0034bd540 sp=0xc0034bd510 pc=0xb99c50
lint_test.go:1352:
cmd/compile/internal/gc.walkstmt(0xc000c33e00, 0xc264c0)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:334 +0x83f fp=0xc0034bd6f8 sp=0xc0034bd540 pc=0xb98b6f
lint_test.go:1352:
cmd/compile/internal/gc.walkstmtlist(0xc003ff24d8, 0x1, 0x1)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:79 +0x46 fp=0xc0034bd720 sp=0xc0034bd6f8 pc=0xb980d6
lint_test.go:1352:
cmd/compile/internal/gc.walk(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:63 +0x39e fp=0xc0034bd7e8 sp=0xc0034bd720 pc=0xb97e0e
lint_test.go:1352:
cmd/compile/internal/gc.compile(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/pgen.go:223 +0x6b fp=0xc0034bd838 sp=0xc0034bd7e8 pc=0xb02a9b
lint_test.go:1352:
cmd/compile/internal/gc.funccompile(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/pgen.go:209 +0xbd fp=0xc0034bd890 sp=0xc0034bd838 pc=0xb0293d
lint_test.go:1352:
cmd/compile/internal/gc.Main(0xcc51f8)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/main.go:641 +0x265b fp=0xc0034bdf20 sp=0xc0034bd890 pc=0xadd1fb
lint_test.go:1352:
main.main()
lint_test.go:1352:
/usr/local/go/src/cmd/compile/main.go:51 +0x96 fp=0xc0034bdf98 sp=0xc0034bdf20 pc=0xbfaa36
lint_test.go:1352:
runtime.main()
lint_test.go:1352:
/usr/local/go/src/runtime/proc.go:201 +0x207 fp=0xc0034bdfe0 sp=0xc0034bdf98 pc=0x42c4a7
lint_test.go:1352:
runtime.goexit()
lint_test.go:1352:
/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0034bdfe8 sp=0xc0034bdfe0 pc=0x457da1
TestLint
--- FAIL: lint/TestLint (253.190s)
```
Please assign, take a look and update the issue accordingly.
| 1.0 | teamcity: failed test: TestLint - The following tests appear to have failed on master (lint): TestLint, TestLint/TestVet: TestLint/TestVet/shadow, TestLint/TestVet
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestLint).
[#1262013](https://teamcity.cockroachdb.com/viewLog.html?buildId=1262013):
```
TestLint/TestVet
--- FAIL: lint/TestLint: TestLint/TestVet (842.530s)
------- Stdout: -------
=== PAUSE TestLint/TestVet
TestLint/TestVet: TestLint/TestVet/shadow
...
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:2218 +0x146 fp=0xc0034bbe28 sp=0xc0034bbdb8 pc=0xba8596
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc0013d1800, 0xc000c33e10, 0xc003ff9b00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:1268 +0x8e98 fp=0xc0034bc4e0 sp=0xc0034bbe28 pc=0xba32a8
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc0013d1a00, 0xc000c33e10, 0xc0013d1a00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:724 +0x42f9 fp=0xc0034bcb98 sp=0xc0034bc4e0 pc=0xb9e709
lint_test.go:1352:
cmd/compile/internal/gc.slicelit(0x0, 0xc000c32e00, 0xc003ff9950, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/sinit.go:866 +0x1db fp=0xc0034bcd80 sp=0xc0034bcb98 pc=0xb2efab
lint_test.go:1352:
cmd/compile/internal/gc.anylit(0xc000c32e00, 0xc003ff9950, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/sinit.go:1130 +0xf2c fp=0xc0034bce58 sp=0xc0034bcd80 pc=0xb335ac
lint_test.go:1352:
cmd/compile/internal/gc.walkexpr(0xc000c32e00, 0xc000c33e10, 0xc000d62e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:1730 +0x2347 fp=0xc0034bd510 sp=0xc0034bce58 pc=0xb9c757
lint_test.go:1352:
cmd/compile/internal/gc.walkexprlist(0xc0005d7410, 0x2, 0x2, 0xc000c33e10)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:379 +0x50 fp=0xc0034bd540 sp=0xc0034bd510 pc=0xb99c50
lint_test.go:1352:
cmd/compile/internal/gc.walkstmt(0xc000c33e00, 0xc264c0)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:334 +0x83f fp=0xc0034bd6f8 sp=0xc0034bd540 pc=0xb98b6f
lint_test.go:1352:
cmd/compile/internal/gc.walkstmtlist(0xc003ff24d8, 0x1, 0x1)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:79 +0x46 fp=0xc0034bd720 sp=0xc0034bd6f8 pc=0xb980d6
lint_test.go:1352:
cmd/compile/internal/gc.walk(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/walk.go:63 +0x39e fp=0xc0034bd7e8 sp=0xc0034bd720 pc=0xb97e0e
lint_test.go:1352:
cmd/compile/internal/gc.compile(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/pgen.go:223 +0x6b fp=0xc0034bd838 sp=0xc0034bd7e8 pc=0xb02a9b
lint_test.go:1352:
cmd/compile/internal/gc.funccompile(0xc0005eab00)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/pgen.go:209 +0xbd fp=0xc0034bd890 sp=0xc0034bd838 pc=0xb0293d
lint_test.go:1352:
cmd/compile/internal/gc.Main(0xcc51f8)
lint_test.go:1352:
/usr/local/go/src/cmd/compile/internal/gc/main.go:641 +0x265b fp=0xc0034bdf20 sp=0xc0034bd890 pc=0xadd1fb
lint_test.go:1352:
main.main()
lint_test.go:1352:
/usr/local/go/src/cmd/compile/main.go:51 +0x96 fp=0xc0034bdf98 sp=0xc0034bdf20 pc=0xbfaa36
lint_test.go:1352:
runtime.main()
lint_test.go:1352:
/usr/local/go/src/runtime/proc.go:201 +0x207 fp=0xc0034bdfe0 sp=0xc0034bdf98 pc=0x42c4a7
lint_test.go:1352:
runtime.goexit()
lint_test.go:1352:
/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0034bdfe8 sp=0xc0034bdfe0 pc=0x457da1
TestLint
--- FAIL: lint/TestLint (253.190s)
```
Please assign, take a look and update the issue accordingly.
| non_usab | teamcity failed test testlint the following tests appear to have failed on master lint testlint testlint testvet testlint testvet shadow testlint testvet you may want to check testlint testvet fail lint testlint testlint testvet stdout pause testlint testvet testlint testvet testlint testvet shadow lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walkexpr lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walkexpr lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc slicelit lint test go usr local go src cmd compile internal gc sinit go fp sp pc lint test go cmd compile internal gc anylit lint test go usr local go src cmd compile internal gc sinit go fp sp pc lint test go cmd compile internal gc walkexpr lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walkexprlist lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walkstmt lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walkstmtlist lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc walk lint test go usr local go src cmd compile internal gc walk go fp sp pc lint test go cmd compile internal gc compile lint test go usr local go src cmd compile internal gc pgen go fp sp pc lint test go cmd compile internal gc funccompile lint test go usr local go src cmd compile internal gc pgen go fp sp pc lint test go cmd compile internal gc main lint test go usr local go src cmd compile internal gc main go fp sp pc lint test go main main lint test go usr local go src cmd compile main go fp sp pc lint test go runtime main lint test go usr local go src runtime proc go fp sp pc lint test go runtime goexit lint test go usr local go src runtime asm s fp sp pc testlint fail lint testlint please assign take a look and update the issue accordingly | 0
13,833 | 9,084,513,044 | IssuesEvent | 2019-02-18 04:00:52 | fieldenms/tg | https://api.github.com/repos/fieldenms/tg | opened | User Role: deactivation of roles should check for existence of active users | P2 Security User management | ### Description
It should not be possible to deactivate `UserRole` instance that are associated with active users.
`UserRole`s are associated with users via `UserAndRoleAssociation`, which models many-2-many association and is not activatable. A `user` is associated with a `role` if a corresponding instance `UserAndRoleAssociation` exists. The validation logic for deactivation of user roles should take this specificity into account.
### Expected outcome
User roles deactivation to be aligned with the use of `UserAndRoleAssociation` for managing user-role associations. | True | User Role: deactivation of roles should check for existence of active users - ### Description
It should not be possible to deactivate `UserRole` instance that are associated with active users.
`UserRole`s are associated with users via `UserAndRoleAssociation`, which models many-2-many association and is not activatable. A `user` is associated with a `role` if a corresponding instance `UserAndRoleAssociation` exists. The validation logic for deactivation of user roles should take this specificity into account.
### Expected outcome
User roles deactivation to be aligned with the use of `UserAndRoleAssociation` for managing user-role associations. | non_usab | user role deactivation of roles should check for existence of active users description it should not be possible to deactivate userrole instance that are associated with active users userrole s are associated with users via userandroleassociation which models many many association and is not activatable a user is associated with a role if a corresponding instance userandroleassociation exists the validation logic for deactivation of user roles should take this specificity into account expected outcome user roles deactivation to be aligned with the use of userandroleassociation for managing user role associations | 0 |
14,023 | 2,789,855,588 | IssuesEvent | 2015-05-08 21:56:53 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | closed | Gauge is Broken | Priority-Medium Type-Defect | Original [issue 317](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=317) created by orwant on 2010-06-16T18:14:44.000Z:
Look even at your own examples - the gauge does not fire correctly on first 'changeGauge' and on subsequent changes it jerks to the change, no longer smooth.
JS error - TypeError: Result of expression 'fe.ve' [undefined] is not an object. default.gauge.I.js:537 | 1.0 | Gauge is Broken - Original [issue 317](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=317) created by orwant on 2010-06-16T18:14:44.000Z:
Look even at your own examples - the gauge does not fire correctly on first 'changeGauge' and on subsequent changes it jerks to the change, no longer smooth.
JS error - TypeError: Result of expression 'fe.ve' [undefined] is not an object. default.gauge.I.js:537 | non_usab | gauge is broken original created by orwant on look even at your own examples the gauge does not fire correctly on first changegauge and on subsequent changes it jerks to the change no longer smooth js error typeerror result of expression fe ve is not an object default gauge i js | 0 |
277,317 | 30,610,829,768 | IssuesEvent | 2023-07-23 15:33:20 | tyhal/tyhal.com | https://api.github.com/repos/tyhal/tyhal.com | closed | CVE-2018-11697 (High) detected in node-sassv4.13.1, CSS::Sassv3.6.0 | security vulnerability | ## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sassv4.13.1</b>, <b>CSS::Sassv3.6.0</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: libsass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-11697 (High) detected in node-sassv4.13.1, CSS::Sassv3.6.0 - ## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sassv4.13.1</b>, <b>CSS::Sassv3.6.0</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: libsass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in node css cve high severity vulnerability vulnerable libraries node css vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass prelexer exactly which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource | 0 |
23,104 | 21,013,712,450 | IssuesEvent | 2022-03-30 09:04:44 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | Openssl now work in Rhel | usability | Hello!
There is RHEL 8, Clickhouse 22.2.2.1
For openssl I generated and signed by the created CA. (method 2 from https://altinity.com/blog/2019/3/5/clickhouse-networking-part-2 )
**CA certificate added to trusted:**
cp ca.crt /etc/pki/ca-trust/anchors
update-ca-etrust extract
**Connection verification via openssl is successful:**
openssl s_client -connect $HOSTNAME:9440 < /dev/null
Verify return code: 0 (ok)
**But connecting via clickhouse-client I get error:**
"The certificate yielded the error: unable to get the issuer certificate" and
"... unable to verify the first certificate"
(you don't have to strictly follow this form)
**I try to add path to trust store certificates, but not help:**
<caConfig>/etc/pki/ca-trust/</caConfig>
<caConfig>/usr/share/pki/ca-trust-source</caConfig>
How can I configure openssl on RHEL?
It works fine in Ubuntu.

| True | Openssl now work in Rhel - Hello!
There is RHEL 8, Clickhouse 22.2.2.1
For openssl I generated and signed by the created CA. (method 2 from https://altinity.com/blog/2019/3/5/clickhouse-networking-part-2 )
**CA certificate added to trusted:**
cp ca.crt /etc/pki/ca-trust/anchors
update-ca-etrust extract
**Connection verification via openssl is successful:**
openssl s_client -connect $HOSTNAME:9440 < /dev/null
Verify return code: 0 (ok)
**But connecting via clickhouse-client I get error:**
"The certificate yielded the error: unable to get the issuer certificate" and
"... unable to verify the first certificate"
(you don't have to strictly follow this form)
**I try to add path to trust store certificates, but not help:**
<caConfig>/etc/pki/ca-trust/</caConfig>
<caConfig>/usr/share/pki/ca-trust-source</caConfig>
How can I configure openssl on RHEL?
It works fine in Ubuntu.

| usab | openssl now work in rhel hello there is rhel clickhouse for openssl i generated and signed by the created ca method from ca certificate added to trusted cp ca crt etc pki ca trust anchors update ca etrust extract connection verification via openssl is successful openssl s client connect hostname dev null verify return code ok but connecting via clickhouse client i get error the certificate yielded the error unable to get the issuer certificate and unable to verify the first certificate you don t have to strictly follow this form i try to add path to trust store certificates but not help etc pki ca trust usr share pki ca trust source how can i configure openssl on rhel it works fine in ubuntu | 1 |
6,342 | 4,228,765,028 | IssuesEvent | 2016-07-04 02:05:42 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Hotkey mode still doesn't stay the same when you switch mobs | Bug Usability | Althought the annoying issue of looking like it did is out, it still doesn't keep the preference when you enter a new mob. | True | Hotkey mode still doesn't stay the same when you switch mobs - Althought the annoying issue of looking like it did is out, it still doesn't keep the preference when you enter a new mob. | usab | hotkey mode still doesn t stay the same when you switch mobs althought the annoying issue of looking like it did is out it still doesn t keep the preference when you enter a new mob | 1 |
15,064 | 9,694,089,600 | IssuesEvent | 2019-05-24 17:56:17 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Scrolling a multiline text box scrolls the Inspector also | enhancement topic:editor usability | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
8fc92ae86faed72c402e7770246ed18d50b5c43b
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Linux
**Issue description:**
<!-- What happened, and what was expected. -->
When the mouse wheel is used to scroll a multiline text box, it will scroll both it and the Inspector. Ideally it should only scroll the text box when the mouse is hovering over it.
| True | Scrolling a multiline text box scrolls the Inspector also - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
8fc92ae86faed72c402e7770246ed18d50b5c43b
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Linux
**Issue description:**
<!-- What happened, and what was expected. -->
When the mouse wheel is used to scroll a multiline text box, it will scroll both it and the Inspector. Ideally it should only scroll the text box when the mouse is hovering over it.
| usab | scrolling a multiline text box scrolls the inspector also please search existing issues for potential duplicates before filing yours godot version os device including version linux issue description when the mouse wheel is used to scroll a multiline text box it will scroll both it and the inspector ideally it should only scroll the text box when the mouse is hovering over it | 1 |
9,421 | 6,288,975,274 | IssuesEvent | 2017-07-19 18:10:48 | FStarLang/FStar | https://api.github.com/repos/FStarLang/FStar | closed | `val` annotations yield ugly-looking signatures | enhancement pull-request usability | Contrast this:
```
val fold_left: ('a -> 'b -> ML 'a) -> 'a -> list 'b -> ML 'a
let rec fold_left f x y = match y with
| [] -> x
| hd::tl -> fold_left f (f x hd) tl
```
which yields this for `#info fold_left`:
```
fold_left: (uu___:(uu___:'a@1 -> uu___:'b@1 -> 'a@3) -> uu___:'a@2 -> uu___:(list 'b@2) -> 'a@4)
```
with this:
```
let rec fold_left (f: 'a -> 'b -> ML 'a) (x: 'a) (y: list 'b) : ML 'a = match y with
| [] -> x
| hd::tl -> fold_left f (f x hd) tl
```
which yields this for `#info fold_left`:
```
fold_left: (f:(uu___:'a@1 -> uu___:'b@1 -> 'a@3) -> x:'a@2 -> y:(list 'b@2) -> 'a@4)
```
It would be nice t preserve the names `f`, `x`, and `y` even when there is a `val`.
| True | `val` annotations yield ugly-looking signatures - Contrast this:
```
val fold_left: ('a -> 'b -> ML 'a) -> 'a -> list 'b -> ML 'a
let rec fold_left f x y = match y with
| [] -> x
| hd::tl -> fold_left f (f x hd) tl
```
which yields this for `#info fold_left`:
```
fold_left: (uu___:(uu___:'a@1 -> uu___:'b@1 -> 'a@3) -> uu___:'a@2 -> uu___:(list 'b@2) -> 'a@4)
```
with this:
```
let rec fold_left (f: 'a -> 'b -> ML 'a) (x: 'a) (y: list 'b) : ML 'a = match y with
| [] -> x
| hd::tl -> fold_left f (f x hd) tl
```
which yields this for `#info fold_left`:
```
fold_left: (f:(uu___:'a@1 -> uu___:'b@1 -> 'a@3) -> x:'a@2 -> y:(list 'b@2) -> 'a@4)
```
It would be nice t preserve the names `f`, `x`, and `y` even when there is a `val`.
| usab | val annotations yield ugly looking signatures contrast this val fold left a b ml a a list b ml a let rec fold left f x y match y with x hd tl fold left f f x hd tl which yields this for info fold left fold left uu uu a uu b a uu a uu list b a with this let rec fold left f a b ml a x a y list b ml a match y with x hd tl fold left f f x hd tl which yields this for info fold left fold left f uu a uu b a x a y list b a it would be nice t preserve the names f x and y even when there is a val | 1 |
19,731 | 4,441,997,699 | IssuesEvent | 2016-08-19 11:41:13 | coala-analyzer/coala | https://api.github.com/repos/coala-analyzer/coala | closed | Codestyle: Add continuation line policy | area/documentation difficulty/newcomer | Basically we are already enforcing a style, where multiple-line lists, dicts, tuples, function definitions, function calls, and any such structures either:
- stay in one line
- span multiple lines that list one parameter/ item each
Since this is not covered by PEP8 we should add a section to our [codestyle](https://github.com/coala-analyzer/coala/blob/master/docs/Getting_Involved/Codestyle.rst)
| 1.0 | Codestyle: Add continuation line policy - Basically we are already enforcing a style, where multiple-line lists, dicts, tuples, function definitions, function calls, and any such structures either:
- stay in one line
- span multiple lines that list one parameter/ item each
Since this is not covered by PEP8 we should add a section to our [codestyle](https://github.com/coala-analyzer/coala/blob/master/docs/Getting_Involved/Codestyle.rst)
| non_usab | codestyle add continuation line policy basically we are already enforcing a style where multiple line lists dicts tuples function definitions function calls and any such structures either stay in one line span multiple lines that list one parameter item each since this is not covered by we should add a section to our | 0 |
19,350 | 13,901,941,958 | IssuesEvent | 2020-10-20 04:09:21 | pulumi/pulumi | https://api.github.com/repos/pulumi/pulumi | opened | Add `logout --all` option | impact/usability kind/enhancement | #### Enhancement
The pulumi CLI should have a `--all` option on `pulumi logout` that removes `~/.pulumi/credentials.json` from the user's machine to "hard reset" a user's credentials without requiring them to remove the file manually. | True | Add `logout --all` option - #### Enhancement
The pulumi CLI should have a `--all` option on `pulumi logout` that removes `~/.pulumi/credentials.json` from the user's machine to "hard reset" a user's credentials without requiring them to remove the file manually. | usab | add logout all option enhancement the pulumi cli should have a all option on pulumi logout that removes pulumi credentials json from the user s machine to hard reset a user s credentials without requiring them to remove the file manually | 1 |
18,666 | 13,152,292,872 | IssuesEvent | 2020-08-09 21:24:01 | greenlion/warp | https://api.github.com/repos/greenlion/warp | opened | Support clone plugin | enhancement usability | MySQL 8.0 can clone a server from another server. Right now only InnoDB is supported by the server for cloning. Cloning WARP based tables should be supported as well. This will require modifying the server and won't be pluggable so it is a semi-non-compatible change to upstream MySQL. WARP could still be loaded into a non WarpSQL server, but cloning would clone WARP tables as empty. | True | Support clone plugin - MySQL 8.0 can clone a server from another server. Right now only InnoDB is supported by the server for cloning. Cloning WARP based tables should be supported as well. This will require modifying the server and won't be pluggable so it is a semi-non-compatible change to upstream MySQL. WARP could still be loaded into a non WarpSQL server, but cloning would clone WARP tables as empty. | usab | support clone plugin mysql can clone a server from another server right now only innodb is supported by the server for cloning cloning warp based tables should be supported as well this will require modifying the server and won t be pluggable so it is a semi non compatible change to upstream mysql warp could still be loaded into a non warpsql server but cloning would clone warp tables as empty | 1 |
16,885 | 11,455,143,266 | IssuesEvent | 2020-02-06 18:29:28 | connectome-neuprint/neuPrintExplorer | https://api.github.com/repos/connectome-neuprint/neuPrintExplorer | closed | Option to copy collected results | enhancement in progress nicetohave usability | People often run a query just to put the results into another query. Add some way to take the format of a column as an input, like a built in collect function. Some check box where you can see and copy your results as a, b, c, d or a,b,c,d. | True | Option to copy collected results - People often run a query just to put the results into another query. Add some way to take the format of a column as an input, like a built in collect function. Some check box where you can see and copy your results as a, b, c, d or a,b,c,d. | usab | option to copy collected results people often run a query just to put the results into another query add some way to take the format of a column as an input like a built in collect function some check box where you can see and copy your results as a b c d or a b c d | 1 |
6,807 | 23,938,141,095 | IssuesEvent | 2022-09-11 14:26:00 | smcnab1/op-question-mark | https://api.github.com/repos/smcnab1/op-question-mark | closed | [BUG] Fix and re-enable HA Restart Notify | Status: Confirmed Type: Bug Priority: High For: Automations | Fix how often notified and only TTS during day then re-enable on HA UI | 1.0 | [BUG] Fix and re-enable HA Restart Notify - Fix how often notified and only TTS during day then re-enable on HA UI | non_usab | fix and re enable ha restart notify fix how often notified and only tts during day then re enable on ha ui | 0 |
11,737 | 7,423,183,064 | IssuesEvent | 2018-03-23 03:42:24 | matomo-org/matomo | https://api.github.com/repos/matomo-org/matomo | closed | Chrome: Selecting the tracking code with one click does not work anymore | Bug c: Usability | We have an angular directive `<pre piwik-select-on-focus>...</pre>` to select for example a code block with just one click. This is used for example when generating the tracking code. It is also used in other places for example by the Widgets screen to select a link or HTML, by Custom Dimensions, by plugins like A/B testing etc.
This is likely due to https://www.chromestatus.com/feature/6680566019653632

It likely still works for `<textarea piwik-select-on-focus>...</textarea>` but not for any other element. | True | Chrome: Selecting the tracking code with one click does not work anymore - We have an angular directive `<pre piwik-select-on-focus>...</pre>` to select for example a code block with just one click. This is used for example when generating the tracking code. It is also used in other places for example by the Widgets screen to select a link or HTML, by Custom Dimensions, by plugins like A/B testing etc.
This is likely due to https://www.chromestatus.com/feature/6680566019653632

It likely still works for `<textarea piwik-select-on-focus>...</textarea>` but not for any other element. | usab | chrome selecting the tracking code with one click does not work anymore we have an angular directive to select for example a code block with just one click this is used for example when generating the tracking code it is also used in other places for example by the widgets screen to select a link or html by custom dimensions by plugins like a b testing etc this is likely due to it likely still works for but not for any other element | 1 |
2,456 | 3,466,160,083 | IssuesEvent | 2015-12-22 01:02:00 | adobe-photoshop/spaces-design | https://api.github.com/repos/adobe-photoshop/spaces-design | opened | Document activation is slow | Performance | There are at least two problems:
1. We currently don't start updating the UI until after Photoshop finishes changing the document and updating the canvas. Usually, we should be able to perform these operations in parallel.
2. For documents that are already initialized, we shouldn't need to change more than one or two CSS classes because each document's panel structure is maintained separately in the DOM. I suspect it takes longer than this because our `shouldComponentUpdate` functions aren't sharp enough to limit re-rendering to the top-most components. | True | Document activation is slow - There are at least two problems:
1. We currently don't start updating the UI until after Photoshop finishes changing the document and updating the canvas. Usually, we should be able to perform these operations in parallel.
2. For documents that are already initialized, we shouldn't need to change more than one or two CSS classes because each document's panel structure is maintained separately in the DOM. I suspect it takes longer than this because our `shouldComponentUpdate` functions aren't sharp enough to limit re-rendering to the top-most components. | non_usab | document activation is slow there are at least two problems we currently don t start updating the ui until after photoshop finishes changing the document and updating the canvas usually we should be able to perform these operations in parallel for documents that are already initialized we shouldn t need to change more than one or two css classes because each document s panel structure is maintained separately in the dom i suspect it takes longer than this because our shouldcomponentupdate functions aren t sharp enough to limit re rendering to the top most components | 0 |
20,683 | 15,878,058,635 | IssuesEvent | 2021-04-09 10:28:07 | opengovsg/checkfirst | https://api.github.com/repos/opengovsg/checkfirst | closed | Make button unclickable when it can't be changed | usability enhancement | See screenshot below
<img width="469" alt="Screenshot 2021-03-22 at 1 52 00 PM" src="https://user-images.githubusercontent.com/23736580/111946149-ea839c80-8b15-11eb-86d7-783e0d59d534.png">
| True | Make button unclickable when it can't be changed - See screenshot below
<img width="469" alt="Screenshot 2021-03-22 at 1 52 00 PM" src="https://user-images.githubusercontent.com/23736580/111946149-ea839c80-8b15-11eb-86d7-783e0d59d534.png">
| usab | make button unclickable when it can t be changed see screenshot below img width alt screenshot at pm src | 1 |
12,009 | 3,562,008,042 | IssuesEvent | 2016-01-24 06:10:15 | CollaboratingPlatypus/PetaPoco | https://api.github.com/repos/CollaboratingPlatypus/PetaPoco | closed | Integration Testing Guidelines - Doc review | documentation review | Doc review request - [Integration testing guidelines](https://github.com/CollaboratingPlatypus/PetaPoco/wiki/Integration-Testing-Guidelines)
@CollaboratingPlatypus/petapoco-documentation
Community input welcome! | 1.0 | Integration Testing Guidelines - Doc review - Doc review request - [Integration testing guidelines](https://github.com/CollaboratingPlatypus/PetaPoco/wiki/Integration-Testing-Guidelines)
@CollaboratingPlatypus/petapoco-documentation
Community input welcome! | non_usab | integration testing guidelines doc review doc review request collaboratingplatypus petapoco documentation community input welcome | 0 |
28,374 | 12,834,703,495 | IssuesEvent | 2020-07-07 11:33:38 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Change the word create to creates | Pri2 cxp doc-enhancement service-fabric/svc triaged | Not: create
But: creates
Then depending on that information the manager service create an instance of your actual contact-storage service just for that customer.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bb25acc-6868-7ae8-5b42-813eee29efa4
* Version Independent ID: ba88e4b7-f27c-6d0a-7550-6e04c6f36c4c
* Content: [Scalability of Service Fabric services - Azure Service Fabric](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-scalability)
* Content Source: [articles/service-fabric/service-fabric-concepts-scalability.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/service-fabric/service-fabric-concepts-scalability.md)
* Service: **service-fabric**
* GitHub Login: @masnider
* Microsoft Alias: **masnider** | 1.0 | Change the word create to creates - Not: create
But: creates
Then depending on that information the manager service create an instance of your actual contact-storage service just for that customer.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bb25acc-6868-7ae8-5b42-813eee29efa4
* Version Independent ID: ba88e4b7-f27c-6d0a-7550-6e04c6f36c4c
* Content: [Scalability of Service Fabric services - Azure Service Fabric](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-scalability)
* Content Source: [articles/service-fabric/service-fabric-concepts-scalability.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/service-fabric/service-fabric-concepts-scalability.md)
* Service: **service-fabric**
* GitHub Login: @masnider
* Microsoft Alias: **masnider** | non_usab | change the word create to creates not create but creates then depending on that information the manager service create an instance of your actual contact storage service just for that customer document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service fabric github login masnider microsoft alias masnider | 0 |
728,143 | 25,067,700,350 | IssuesEvent | 2022-11-07 09:40:04 | JanssenProject/jans | https://api.github.com/repos/JanssenProject/jans | opened | fix:(jans-cli) unable to add custom script with mendatory fields | kind-bug comp-jans-config-api priority-1 | **Describe the bug**
unable to add custom script with mendatory fields
**To Reproduce**
Steps to reproduce the behavior:
1. isntall jans 1.0.3 release
2. launch /opt/jans/jans-cli/config-cli.py
3. select 6 for custom script and 2 to add custom script
4. add required details
5. when prompted "Populate optional fields?" add n
6. continue? add y
7. see error
**Expected behavior**
should able to add custom script with mendatory fields
**Screenshots**

**Desktop (please complete the following information):**
- OS: ubuntu
- Browser [e.g. chrome, safari]
- Version 20.04
- DB: openDJ
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| 1.0 | fix:(jans-cli) unable to add custom script with mendatory fields - **Describe the bug**
unable to add custom script with mendatory fields
**To Reproduce**
Steps to reproduce the behavior:
1. isntall jans 1.0.3 release
2. launch /opt/jans/jans-cli/config-cli.py
3. select 6 for custom script and 2 to add custom script
4. add required details
5. when prompted "Populate optional fields?" add n
6. continue? add y
7. see error
**Expected behavior**
should able to add custom script with mendatory fields
**Screenshots**

**Desktop (please complete the following information):**
- OS: ubuntu
- Browser [e.g. chrome, safari]
- Version 20.04
- DB: openDJ
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| non_usab | fix jans cli unable to add custom script with mendatory fields describe the bug unable to add custom script with mendatory fields to reproduce steps to reproduce the behavior isntall jans release launch opt jans jans cli config cli py select for custom script and to add custom script add required details when prompted populate optional fields add n continue add y see error expected behavior should able to add custom script with mendatory fields screenshots desktop please complete the following information os ubuntu browser version db opendj smartphone please complete the following information device os browser version additional context add any other context about the problem here | 0 |
22,276 | 18,943,530,871 | IssuesEvent | 2021-11-18 07:29:28 | VirtusLab/git-machete | https://api.github.com/repos/VirtusLab/git-machete | closed | `github create-pr`: check if base branch for PR exists in remote | github usability | Attempt to create a pull request with base branch being already deleted from remote ends up with `Unprocessable Entity` error. (example in #332).
Proposed solution:
Perform `git fetch <remote>` at the beginning of `create-pr`. if base branch is not present in remote branches, perform `handle_untracked_branch` with relevant remote for missing base branch. | True | `github create-pr`: check if base branch for PR exists in remote - Attempt to create a pull request with base branch being already deleted from remote ends up with `Unprocessable Entity` error. (example in #332).
Proposed solution:
Perform `git fetch <remote>` at the beginning of `create-pr`. if base branch is not present in remote branches, perform `handle_untracked_branch` with relevant remote for missing base branch. | usab | github create pr check if base branch for pr exists in remote attempt to create a pull request with base branch being already deleted from remote ends up with unprocessable entity error example in proposed solution perform git fetch at the beginning of create pr if base branch is not present in remote branches perform handle untracked branch with relevant remote for missing base branch | 1 |
469,568 | 13,521,017,602 | IssuesEvent | 2020-09-15 06:14:19 | geocollections/sarv-edit | https://api.github.com/repos/geocollections/sarv-edit | closed | location table upload error | Difficulty: Hard Priority: High Source: API Status: Completed Type: Bug | API throws key error:

and suggests choices related to 'attachment_link' table | 1.0 | location table upload error - API throws key error:

and suggests choices related to 'attachment_link' table | non_usab | location table upload error api throws key error and suggests choices related to attachment link table | 0 |
50,781 | 21,409,800,686 | IssuesEvent | 2022-04-22 03:47:24 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | [QUERY] ServiceBusQueueOperation and ServiceBusTopicOperation extends deprecated interface (SendOperation) | question Client customer-reported needs-team-attention azure-spring azure-spring-servicebus | **Query/Question**
* com.azure.spring.integration.servicebus.queue.ServiceBusQueueOperation
* com.azure.spring.integration.servicebus.topic.ServiceBusTopicOperation
I'm using these interfaces for sending to the Service Bus.
And I found that the interface (SendOperation) they inherit are deprecated.
Is there any alternative to avoid using deprecations?
**Setup (please complete the following information if applicable):**
- Library/Libraries: com.azure.spring:azure-spring-integration-servicebus:2.13.0 | 1.0 | [QUERY] ServiceBusQueueOperation and ServiceBusTopicOperation extends deprecated interface (SendOperation) - **Query/Question**
* com.azure.spring.integration.servicebus.queue.ServiceBusQueueOperation
* com.azure.spring.integration.servicebus.topic.ServiceBusTopicOperation
I'm using these interfaces for sending to the Service Bus.
And I found that the interface (SendOperation) they inherit are deprecated.
Is there any alternative to avoid using deprecations?
**Setup (please complete the following information if applicable):**
- Library/Libraries: com.azure.spring:azure-spring-integration-servicebus:2.13.0 | non_usab | servicebusqueueoperation and servicebustopicoperation extends deprecated interface sendoperation query question com azure spring integration servicebus queue servicebusqueueoperation com azure spring integration servicebus topic servicebustopicoperation i m using these interfaces for sending to the service bus and i found that the interface sendoperation they inherit are deprecated is there any alternative to avoid using deprecations setup please complete the following information if applicable library libraries com azure spring azure spring integration servicebus | 0 |
120,075 | 15,700,646,322 | IssuesEvent | 2021-03-26 10:07:37 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | opened | [Bug] UX : End Tour option must be displayed to user on the Tour Pop up | Bug Low Needs Design Needs Triaging | Steps to reproduce:
1) Navigate to application
2) Click on Welcome tour
3) Now navigate through the app
and observe the pop up
Observation:
End tour option is not displayed on the Pop up however this is placed on the header.
Expectation:
Since the user focus is on the Pop up that is helping him to navigate or travel through the app it would be great help to display the "End Tour" option on the Pop Up
| 1.0 | [Bug] UX : End Tour option must be displayed to user on the Tour Pop up - Steps to reproduce:
1) Navigate to application
2) Click on Welcome tour
3) Now navigate through the app
and observe the pop up
Observation:
End tour option is not displayed on the Pop up however this is placed on the header.
Expectation:
Since the user focus is on the Pop up that is helping him to navigate or travel through the app it would be great help to display the "End Tour" option on the Pop Up
| non_usab | ux end tour option must be displayed to user on the tour pop up steps to reproduce navigate to application click on welcome tour now navigate through the app and observe the pop up observation end tour option is not displayed on the pop up however this is placed on the header expectation since the user focus is on the pop up that is helping him to navigate or travel through the app it would be great help to display the end tour option on the pop up | 0 |
323,420 | 23,946,784,357 | IssuesEvent | 2022-09-12 08:04:48 | Tonomy-Foundation/Tonomy-ID | https://api.github.com/repos/Tonomy-Foundation/Tonomy-ID | closed | Create a developer guide | documentation duplicate | maybe with videos
Contributors guide
maybe with slides
Different repos
How they work together
How to run the app
How run tests
Development
Environment and dependencies
linting and code practices
See https://www.notion.so/tonomy-foundation/Developer-contributor-guide-for-Tonomy-ID-b88e1fbf725840e0bbee9a3abcdea640 | 1.0 | Create a developer guide - maybe with videos
Contributors guide
maybe with slides
Different repos
How they work together
How to run the app
How run tests
Development
Environment and dependencies
linting and code practices
See https://www.notion.so/tonomy-foundation/Developer-contributor-guide-for-Tonomy-ID-b88e1fbf725840e0bbee9a3abcdea640 | non_usab | create a developer guide maybe with videos contributors guide maybe with slides different repos how they work together how to run the app how run tests development environment and dependencies linting and code practices see | 0 |
7,304 | 4,866,351,066 | IssuesEvent | 2016-11-14 23:28:20 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 29255917: Two-digit build numbers are not on the same line | classification:ui/usability reproducible:always status:open | #### Description
Summary:
If an app has a two-digit build number, when viewing the version/build in the "Activity" section of the app in iTC, the first number of the build number is on a different line than the second. See screenshots attached.
Apps with one-digit build numbers are not affected by this issue.
Steps to Reproduce:
1. Upload an app with a two-digit build number to iTunes Connect.
2. Click the Activity tab.
3. Click the version of the app.
4. You will be brought to a screen detailing info about the app. I have attached screenshots of this.
5. Look under the "Bundle Version String" section.
Expected Results:
The two digits of the build number should be on the same line.
Actual Results:
The two digits of the build number are not on the same line.
Version:
macOS 10.12.2 beta 2 (build 16C41b), Chrome 54.0.2840.98 (stable channel)
Notes:
Configuration:
Attachments:
'Screen Shot 2016-11-14 at 5.07.36 PM.png' and 'Screen Shot 2016-11-14 at 5.07.31 PM.png' were successfully uploaded.
-
Product Version: N/A
Created: 2016-11-14T23:14:07.577120
Originated: 2016-11-14T17:12:00
Open Radar Link: http://www.openradar.me/29255917 | True | 29255917: Two-digit build numbers are not on the same line - #### Description
Summary:
If an app has a two-digit build number, when viewing the version/build in the "Activity" section of the app in iTC, the first number of the build number is on a different line than the second. See screenshots attached.
Apps with one-digit build numbers are not affected by this issue.
Steps to Reproduce:
1. Upload an app with a two-digit build number to iTunes Connect.
2. Click the Activity tab.
3. Click the version of the app.
4. You will be brought to a screen detailing info about the app. I have attached screenshots of this.
5. Look under the "Bundle Version String" section.
Expected Results:
The two digits of the build number should be on the same line.
Actual Results:
The two digits of the build number are not on the same line.
Version:
macOS 10.12.2 beta 2 (build 16C41b), Chrome 54.0.2840.98 (stable channel)
Notes:
Configuration:
Attachments:
'Screen Shot 2016-11-14 at 5.07.36 PM.png' and 'Screen Shot 2016-11-14 at 5.07.31 PM.png' were successfully uploaded.
-
Product Version: N/A
Created: 2016-11-14T23:14:07.577120
Originated: 2016-11-14T17:12:00
Open Radar Link: http://www.openradar.me/29255917 | usab | two digit build numbers are not on the same line description summary if an app has a two digit build number when viewing the version build in the activity section of the app in itc the first number of the build number is on a different line than the second see screenshots attached apps with one digit build numbers are not affected by this issue steps to reproduce upload an app with a two digit build number to itunes connect click the activity tab click the version of the app you will be brought to a screen detailing info about the app i have attached screenshots of this look under the bundle version string section expected results the two digits of the build number should be on the same line actual results the two digits of the build number are not on the same line version macos beta build chrome stable channel notes configuration attachments screen shot at pm png and screen shot at pm png were successfully uploaded product version n a created originated open radar link | 1 |
18,778 | 13,213,438,579 | IssuesEvent | 2020-08-16 12:48:03 | textpattern/textpattern | https://api.github.com/repos/textpattern/textpattern | closed | 'No styles recorded' message needed | usability | ### Expected behaviour
There should be a 'No styles recorded' message on the styles page panel like so when no styles are available for the current theme:
```
graf(
span(null, array('class' => 'ui-icon ui-icon-info')).' '.
gTxt('no_css_recorded'),
array('class' => 'alert-block information')
);
```
### Actual behaviour
The left-hand column is blank, no message. This is inconsistent UI behaviour compared to the rest of the admin-side.
### Steps to reproduce
1. Select a theme.
2. Unassign styles from each section (i.e. style = none).
3. Go to styles panel and delete all styles.
4. See the empty left-hand column.
#### Additional information
Textpattern version: 4.8.3, 4.9
Once someone has coded this in, I will ensure the Textpacks are updated with this extra entry. | True | 'No styles recorded' message needed - ### Expected behaviour
There should be a 'No styles recorded' message on the styles page panel like so when no styles are available for the current theme:
```
graf(
span(null, array('class' => 'ui-icon ui-icon-info')).' '.
gTxt('no_css_recorded'),
array('class' => 'alert-block information')
);
```
### Actual behaviour
The left-hand column is blank, no message. This is inconsistent UI behaviour compared to the rest of the admin-side.
### Steps to reproduce
1. Select a theme.
2. Unassign styles from each section (i.e. style = none).
3. Go to styles panel and delete all styles.
4. See the empty left-hand column.
#### Additional information
Textpattern version: 4.8.3, 4.9
Once someone has coded this in, I will ensure the Textpacks are updated with this extra entry. | usab | no styles recorded message needed expected behaviour there should be a no styles recorded message on the styles page panel like so when no styles are available for the current theme graf span null array class ui icon ui icon info gtxt no css recorded array class alert block information actual behaviour the left hand column is blank no message this is inconsistent ui behaviour compared to the rest of the admin side steps to reproduce select a theme unassign styles from each section i e style none go to styles panel and delete all styles see the empty left hand column additional information textpattern version once someone has coded this in i will ensure the textpacks are updated with this extra entry | 1 |
258,207 | 22,292,969,315 | IssuesEvent | 2022-06-12 16:33:28 | scylladb/scylla | https://api.github.com/repos/scylladb/scylla | closed | test test_mv_quoted_column_names_build failed | test materialized-views | The cql-pytest test `test_materialized_view.py`::`test_mv_quoted_column_names_build` failed once in a CI build, in
https://jenkins.scylladb.com/job/releng/job/Scylla-CI/765/testReport/junit/(root)/test_materialized_view/test_mv_quoted_column_names_build/
```
def test_mv_quoted_column_names_build(cql, test_keyspace):
for colname in ['"dog"', '"Dog"', 'DOG', '"to"', 'int']:
with new_test_table(cql, test_keyspace, f'p int primary key, {colname} int') as table:
cql.execute(f'INSERT INTO {table} (p, {colname}) values (1, 2)')
with new_materialized_view(cql, table, '*', f'{colname}, p', f'{colname} is not null and p is not null') as mv:
# When Scylla's view builder fails as it did in issue #9450,
# there is no way to tell this state apart from a view build
# that simply hasn't completed (besides looking at the logs,
# which we don't). This means, unfortunately, that a failure
# of this test is slow - it needs to wait for a timeout.
start_time = time.time()
while time.time() < start_time + 30:
if list(cql.execute(f'SELECT * from {mv}')) == [(2, 1)]:
break
> assert list(cql.execute(f'SELECT * from {mv}')) == [(2, 1)]
E assert [] == [(2, 1)]
E Right contains one more item: (2, 1)
E Use -v to get the full diff
```
This test writes a tiny table (just one row), starts to build a view, and expects to see the view being updated in 30 seconds or less.
My usual guess in such fluke failures that haven't reproduced beyond this one time is an overcommitted test machine which causes a test to me hundreds of times slower than usual, but in this case, it's very surprising that building a view on a table with one row should take 30 seconds, even on a ridiculously overcommitted machine. But I don't have any other idea what can explain this failure. | 1.0 | test test_mv_quoted_column_names_build failed - The cql-pytest test `test_materialized_view.py`::`test_mv_quoted_column_names_build` failed once in a CI build, in
https://jenkins.scylladb.com/job/releng/job/Scylla-CI/765/testReport/junit/(root)/test_materialized_view/test_mv_quoted_column_names_build/
```
def test_mv_quoted_column_names_build(cql, test_keyspace):
for colname in ['"dog"', '"Dog"', 'DOG', '"to"', 'int']:
with new_test_table(cql, test_keyspace, f'p int primary key, {colname} int') as table:
cql.execute(f'INSERT INTO {table} (p, {colname}) values (1, 2)')
with new_materialized_view(cql, table, '*', f'{colname}, p', f'{colname} is not null and p is not null') as mv:
# When Scylla's view builder fails as it did in issue #9450,
# there is no way to tell this state apart from a view build
# that simply hasn't completed (besides looking at the logs,
# which we don't). This means, unfortunately, that a failure
# of this test is slow - it needs to wait for a timeout.
start_time = time.time()
while time.time() < start_time + 30:
if list(cql.execute(f'SELECT * from {mv}')) == [(2, 1)]:
break
> assert list(cql.execute(f'SELECT * from {mv}')) == [(2, 1)]
E assert [] == [(2, 1)]
E Right contains one more item: (2, 1)
E Use -v to get the full diff
```
This test writes a tiny table (just one row), starts to build a view, and expects to see the view being updated in 30 seconds or less.
My usual guess in such fluke failures that haven't reproduced beyond this one time is an overcommitted test machine which causes a test to me hundreds of times slower than usual, but in this case, it's very surprising that building a view on a table with one row should take 30 seconds, even on a ridiculously overcommitted machine. But I don't have any other idea what can explain this failure. | non_usab | test test mv quoted column names build failed the cql pytest test test materialized view py test mv quoted column names build failed once in a ci build in def test mv quoted column names build cql test keyspace for colname in with new test table cql test keyspace f p int primary key colname int as table cql execute f insert into table p colname values with new materialized view cql table f colname p f colname is not null and p is not null as mv when scylla s view builder fails as it did in issue there is no way to tell this state apart from a view build that simply hasn t completed besides looking at the logs which we don t this means unfortunately that a failure of this test is slow it needs to wait for a timeout start time time time while time time start time if list cql execute f select from mv break assert list cql execute f select from mv e assert e right contains one more item e use v to get the full diff this test writes a tiny table just one row starts to build a view and expects to see the view being updated in seconds or less my usual guess in such fluke failures that haven t reproduced beyond this one time is an overcommitted test machine which causes a test to me hundreds of times slower than usual but in this case it s very surprising that building a view on a table with one row should take seconds even on a ridiculously overcommitted machine but i don t have any other idea what can explain this failure | 0 |
79,515 | 28,349,930,596 | IssuesEvent | 2023-04-12 01:20:37 | dotCMS/core | https://api.github.com/repos/dotCMS/core | reopened | Pages View: Edit Mode must consider language and device | Type : Defect Team : Scout | ### Parent Issue
#22343
### Problem Statement
If I click on a page using the new pages list, it takes me to the Edit Mode without respecting the page language and the default device.
### Steps to Reproduce
1. Using full starter, open any existing page in English from the new Pages view.
2. Once in Edit Mode, change the device
3. Go back to the Pages View and open a page in Spanish (for example: Destinos)
4. You should see the Edit Mode in English and the last selected device
https://user-images.githubusercontent.com/8741395/221628050-3f97a21a-dbaf-4631-8737-9f8baa03b0ce.mov
### Acceptance Criteria
When I click on any page from the list, it should behave the same way the site browser does, respecting page language and device.
### dotCMS Version
23.03
### Proposed Objective
Core Features
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_ | 1.0 | Pages View: Edit Mode must consider language and device - ### Parent Issue
#22343
### Problem Statement
If I click on a page using the new pages list, it takes me to the Edit Mode without respecting the page language and the default device.
### Steps to Reproduce
1. Using full starter, open any existing page in English from the new Pages view.
2. Once in Edit Mode, change the device
3. Go back to the Pages View and open a page in Spanish (for example: Destinos)
4. You should see the Edit Mode in English and the last selected device
https://user-images.githubusercontent.com/8741395/221628050-3f97a21a-dbaf-4631-8737-9f8baa03b0ce.mov
### Acceptance Criteria
When I click on any page from the list, it should behave the same way the site browser does, respecting page language and device.
### dotCMS Version
23.03
### Proposed Objective
Core Features
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_ | non_usab | pages view edit mode must consider language and device parent issue problem statement if i click on a page using the new pages list it takes me to the edit mode without respecting the page language and the default device steps to reproduce using full starter open any existing page in english from the new pages view once in edit mode change the device go back to the pages view and open a page in spanish for example destinos you should see the edit mode in english and the last selected device acceptance criteria when i click on any page from the list it should behave the same way the site browser does respecting page language and device dotcms version proposed objective core features proposed priority priority important external links slack conversations support tickets figma designs etc no response assumptions initiation needs no response quality assurance notes workarounds no response sub tasks estimates no response | 0 |
19,837 | 14,628,749,456 | IssuesEvent | 2020-12-23 14:41:59 | dstackai/dstack | https://api.github.com/repos/dstackai/dstack | closed | Drop Reports and Jobs. Introduce Applications and ML Models. | enhancement usability | Drop Reports and Jobs from the sidebar.
Replace Stacks with Applications and ML Models. Instead of showing all types of stacks within Stacks, we need to offer two separate pages. Each page going to have its own `Create Stack` - `Push Application` and `Push ML Model`. | True | Drop Reports and Jobs. Introduce Applications and ML Models. - Drop Reports and Jobs from the sidebar.
Replace Stacks with Applications and ML Models. Instead of showing all types of stacks within Stacks, we need to offer two separate pages. Each page going to have its own `Create Stack` - `Push Application` and `Push ML Model`. | usab | drop reports and jobs introduce applications and ml models drop reports and jobs from the sidebar replace stacks with applications and ml models instead of showing all types of stacks within stacks we need to offer two separate pages each page going to have its own create stack push application and push ml model | 1 |
15,179 | 9,817,969,581 | IssuesEvent | 2019-06-13 18:03:05 | tlaplus/tlaplus | https://api.github.com/repos/tlaplus/tlaplus | closed | Model editor wastes screen estate | Toolbox enhancement help wanted usability | Let model editor make better use of screen estate. Especially:
- Result Page
- General (alignment values)
- Statistics (horizontally)
- Progress Output (vertically)
| True | Model editor wastes screen estate - Let model editor make better use of screen estate. Especially:
- Result Page
- General (alignment values)
- Statistics (horizontally)
- Progress Output (vertically)
| usab | model editor wastes screen estate let model editor make better use of screen estate especially result page general alignment values statistics horizontally progress output vertically | 1 |
6,638 | 4,415,511,652 | IssuesEvent | 2016-08-14 04:16:26 | rabbitmq/rabbitmq-consistent-hash-exchange | https://api.github.com/repos/rabbitmq/rabbitmq-consistent-hash-exchange | closed | Documentation Lacking around Routing Key Implications | bug doc effort-low usability | In my opinion you are missing some relevant information about the significance of the routing key. The questions I would like to see answered are:
Given a single consumer what should my routing key be? Based on the current documentation I think it should be less than thousands but as close to thousands as possible. So is 1000 a good number?
Given the addition of a new consumer should I adjust my bindings to reflect the additional consumer ie with 1 consumer we have a routing key of 1000 but with 2 we have a routing key of 500 on each queue because we expect the same throughput but spread across multiple queues.
Also, given the previous statement is true, why is this not just a percentage of total work load? Ie we you specify with the routing key what percentage of messages go to each queue. (this is why I don't think the previous statement is true)
Also, I will just throw it out there that I think you should handle a # being passed as the routing key. I started out without fully understanding and just assumed that a # would give me everything. That caused some real headaches.
Finally, I don't want to sound like a negative nancy so I will commend you on this excellent plugin. I will probably be sending a pull request in the near future. Our routing keys are of this nature ID.Hash.MessageType which will cause some slight issues when the message type is different. There is a chance that a messagem of type Foo will not go to the same queue as a message of type Bar, even though they have the same Identifier. Therefore, the feature I will (hopefully) add is the ability to specify what segment of the routing key to hash with some pattern matching. In our scenario something like #.#.* where hashes are matches and *'s are ignores or something similar. | True | Documentation Lacking around Routing Key Implications - In my opinion you are missing some relevant information about the significance of the routing key. The questions I would like to see answered are:
Given a single consumer what should my routing key be? Based on the current documentation I think it should be less than thousands but as close to thousands as possible. So is 1000 a good number?
Given the addition of a new consumer should I adjust my bindings to reflect the additional consumer ie with 1 consumer we have a routing key of 1000 but with 2 we have a routing key of 500 on each queue because we expect the same throughput but spread across multiple queues.
Also, given the previous statement is true, why is this not just a percentage of total work load? Ie we you specify with the routing key what percentage of messages go to each queue. (this is why I don't think the previous statement is true)
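The segment-selection feature proposed in the report above can be sketched in a few lines. This is illustrative Python only; the `#`/`*` pattern semantics are taken from the report itself, and the function name is made up, not part of the consistent-hash exchange plugin:

```python
def hash_segment(routing_key: str, pattern: str) -> str:
    """Return the part of the routing key that should feed the hash.

    Pattern segments: '#' means "include this segment in the hash",
    '*' means "ignore it". Both strings are dot-separated.
    """
    key_parts = routing_key.split(".")
    pattern_parts = pattern.split(".")
    if len(key_parts) != len(pattern_parts):
        raise ValueError("pattern does not match routing key shape")
    return ".".join(k for k, p in zip(key_parts, pattern_parts) if p == "#")

# For ID.Hash.MessageType keys with pattern '#.#.*', a Foo and a Bar message
# that share ID and Hash now select the same value to hash:
assert hash_segment("42.abc.Foo", "#.#.*") == hash_segment("42.abc.Bar", "#.#.*")
```

Feeding the result (instead of the full key) into the consistent-hash step would keep messages with the same identifier on one queue regardless of message type.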
17,911 | 12,421,475,062 | IssuesEvent | 2020-05-23 17:02:33 | christianpoveda/pijama | https://api.github.com/repos/christianpoveda/pijama | opened | Parse hexadecimal and binary integers | A-parsing C-enhancement C-usability E-easy E-mentoring | We only support decimal integers now. However, in certain contexts it is clearer to write `0xff` or `0b11` instead. To solve this issue we need to:
- [ ] Figure out the syntax:
- do `0x10` and `0X10` mean the same? are they both valid?
- what happens with the `-`? is processed as usual and just flips the sign of the number?
- [ ] Write two new parsers `hex_number` and `bin_number` inside `parser::literal`.
- [ ] Add some parsing and evaluation tests. Stuff like `2 == 0b10` and so on.
One problem here is that there is no method in `std` to produce an `i64` from a string like `0x42f23a`. We would need to do that conversion by hand during parsing (I don't think adding a dependency is worth it).
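The by-hand conversion the issue mentions is just digit accumulation. A sketch of the algorithm in Python (the real parser lives in Rust inside `parser::literal`, so this only illustrates the logic, and the function name is invented):

```python
def parse_int_literal(text: str) -> int:
    """Parse '0x...'/'0b...' (and plain decimal) by hand, digit by digit."""
    sign = 1
    if text.startswith("-"):
        sign, text = -1, text[1:]
    if text[:2].lower() == "0x":      # accept both 0x10 and 0X10
        base, digits = 16, text[2:]
    elif text[:2].lower() == "0b":
        base, digits = 2, text[2:]
    else:
        base, digits = 10, text
    value = 0
    for ch in digits:
        d = int(ch, 16)  # digit value; covers 0-9 and a-f/A-F
        if d >= base:
            raise ValueError(f"invalid digit {ch!r} for base {base}")
        value = value * base + d
    return sign * value

assert parse_int_literal("0x42f23a") == 0x42F23A
assert parse_int_literal("0b10") == 2   # so `2 == 0b10` evaluates as expected
```

The same loop in Rust would accumulate into an `i64`, with an overflow check on each step.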
12,807 | 8,123,579,926 | IssuesEvent | 2018-08-16 14:58:30 | nerdalize/nerd | https://api.github.com/repos/nerdalize/nerd | opened | Slow OOM containers are complete gracefully without any hint | usability | When running the ethereum client `nerd job run ethereum/client-go` as a job, the process will (seemingly) end gracefully. What happened is that the process slowly reached the memory constraint and Kubernetes asked the process to exit gracefully. Running it like this solves the problem: `nerd job run --memory=6GiB ethereum/client-go`
We should probably catch this for the CLI user by reading out OOM events and displaying them inline:
```
JOB IMAGE INPUT OUTPUT MEMORY VCPU CREATED AT PHASE DETAILS
flights2-co2-calc nerdalize/co2-calculator flights2 1.1 1.0 27 minutes ago Completed OOM detected
```
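One way to produce the `OOM detected` detail shown above is to inspect the pod's container statuses after the job finishes. A rough sketch over the JSON shape the Kubernetes API returns (field names follow the core/v1 PodStatus schema; this is illustrative Python, not nerd's actual code):

```python
def oom_detail(pod_status: dict) -> str:
    """Return a short detail string if any container was OOM-killed."""
    for cs in pod_status.get("containerStatuses", []):
        terminated = (cs.get("lastState", {}).get("terminated")
                      or cs.get("state", {}).get("terminated"))
        if terminated and terminated.get("reason") == "OOMKilled":
            return "OOM detected"
    return ""

# Example: a container terminated with the kernel OOM killer's exit code 137.
status = {"containerStatuses": [
    {"name": "main", "state": {"terminated": {"reason": "OOMKilled", "exitCode": 137}}}
]}
assert oom_detail(status) == "OOM detected"
```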
25,093 | 24,704,317,517 | IssuesEvent | 2022-10-19 17:42:51 | adrianamariaruiz/BOG005-social-network | https://api.github.com/repos/adrianamariaruiz/BOG005-social-network | closed | test de usabilidad | usabilidad | Using localhost to run the usability tests and incorporate the feedback into the app.
175,254 | 14,521,046,421 | IssuesEvent | 2020-12-14 06:40:40 | vmware/herald | https://api.github.com/repos/vmware/herald | opened | Add Venue Check-in screenshots and explanation | documentation | - Include advantages over QR code manual check-in
- Ensure it is obvious that this can be used independently of the phone-to-phone protocol in use (i.e. Herald Beacon detection can live side by side with GAEN/Robert/other protocol on a GAEN/Robert/other powered app) | 1.0 | Add Venue Check-in screenshots and explanation - - Include advantages over QR code manual check-in
20,919 | 16,194,917,220 | IssuesEvent | 2021-05-04 13:31:30 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Bad mouse position when releasing EditorSpinSlider | bug regression topic:editor usability | **Godot version:**
4.0 Master branch
**OS/device including version:**
Windows 10
**Issue description:**
When the mouse cursor reappears after dragging an EditorSpinSlider grabber, the position of the cursor is wrong.
It reappears at the exact location it was hidden despite the fact that the slider has moved.

**Steps to reproduce:**
Move any slider in an EditorSpinSlider (with the mouse; set the rotation_degrees property of a Node2D, for instance)
Look at the location where the cursor reappears. | True | Bad mouse position when releasing EditorSpinSlider - **Godot version:**
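A plausible fix direction is to warp the cursor to wherever the grabber ended up, rather than the stored hide position. The mapping from slider value to screen position is simple; this is an illustrative Python sketch only (Godot's editor code is C++, and these names are made up):

```python
def grabber_screen_x(track_x: float, track_width: float,
                     value: float, vmin: float, vmax: float) -> float:
    """Screen x where the grabber (and thus the cursor) should reappear."""
    ratio = (value - vmin) / (vmax - vmin)
    return track_x + ratio * track_width

# Dragging rotation_degrees from 0 to 180 on a 0..360 slider that is
# 100 px wide and starts at x=10 should restore the cursor at mid-track:
assert grabber_screen_x(10, 100, 180, 0, 360) == 60.0
```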
1,937 | 3,025,369,023 | IssuesEvent | 2015-08-03 08:05:41 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 21373236: iOS 9: Glitch when rotating the screen and swiping between screens | classification:ui/usability reproducible:always status:open | #### Description
1. Be on iPad homescreen, in portrait
2. Swipe left to new search
3. Rotate to landscape
4. Swipe right back to first app page
Result: app folders which are on the first page are black for a moment before they become translucent again
→ Also broken: rdar://21373224
-
Product Version: iOS 9 (13A4254v)
Created: 2015-06-13T21:07:24.911390
Originated: 2015-06-13T23:07:00
Open Radar Link: http://www.openradar.me/21373236 | True | 21373236: iOS 9: Glitch when rotating the screen and swiping between screens - #### Description
17,406 | 11,993,583,141 | IssuesEvent | 2020-04-08 12:17:14 | the-tale/the-tale | https://api.github.com/repos/the-tale/the-tale | opened | Эмиссары: доработки интерфейса | comp_emissary cont_usability est_simple good first issue type_improvement | In the tables listing emissaries, remove/shrink/replace the "status" column.
Add the end time to the information about running events (especially in the tables).
159,607 | 12,483,669,206 | IssuesEvent | 2020-05-30 10:40:05 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | [Battlegrounds] Demolishers | Fix - Tester Confirmed | **Links:** https://wowwiki.fandom.com/wiki/Strand_of_the_Ancients
https://wowwiki.fandom.com/wiki/Wintergrasp
https://www.wowhead.com/npc=28094/wintergrasp-demolisher#comments
**What is Happening:** The demolishers in both [Strand of the Ancients](https://wowwiki.fandom.com/wiki/Strand_of_the_Ancients) and [Wintergrasp](https://wowwiki.fandom.com/wiki/Wintergrasp) behave incorrectly:
The players in the passenger seats of demolishers are able to cast/shoot as intended, but their damage output is greatly increased. It seems they get a 2-3x damage increase by doing so.
I have searched in wowhead comments, in wowwiki pages, and was unable to find any mention of such a thing.
**What Should happen:**
The demolishers should provide a safe spot for ranged characters to attack, but it shouldn't affect their raw damage output.
32,434 | 26,696,630,004 | IssuesEvent | 2023-01-27 10:54:54 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | closed | Fix image publishing issue with containerd | ctg-enhancement comp-infrastructure | **Description**
Docker images publishing occasionally fails with error: `failed to copy: io: read/write on closed pipe`.
Examples of builds affected:
https://github.com/UnitTestBot/UTBotJava/actions/runs/4014201944
https://github.com/UnitTestBot/UTBotJava/actions/runs/4004789340
https://github.com/UnitTestBot/UTBotJava/actions/runs/3993543945
Problem mentioned here https://github.com/containerd/containerd/issues/7972
**Expected behavior**
Publishing works well
**Environment**
Not applicable
**Potential alternatives**
No
**Context**
Not applicable
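Until the containerd fix referenced above ships, transient `read/write on closed pipe` failures are usually worked around by retrying the publish step with backoff. A generic sketch of that wrapper (illustrative Python; not part of the UTBot workflow, and it retries only on `RuntimeError` to keep the example small):

```python
import time

def retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on RuntimeError."""
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

# Simulate a push that fails twice with the containerd error, then succeeds.
calls = []
def flaky_push():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("failed to copy: io: read/write on closed pipe")
    return "pushed"

assert retry(flaky_push, attempts=3, base_delay=0) == "pushed"
```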
25,350 | 25,031,723,836 | IssuesEvent | 2022-11-04 12:58:04 | precice/precice | https://api.github.com/repos/precice/precice | opened | Rename attributes in `m2n` tag | usability breaking change | ## Problem
We currently have:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
```
The `from` and `to` are kind of hard to understand. I often have to look up again which was which. And sometimes users misunderstand these and think that for a bi-directional coupling one would need two:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
<m2n:sockets from="Solid" to="Fluid" exchange-directory="../" />
```
## Suggested solution
Rename to:
```xml
<m2n:sockets acceptor="Fluid" requestor="Solid" exchange-directory="../" />
```
## Alternatives
?
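Whatever names are chosen, the point that one `m2n` entry covers both directions can be made machine-checkable. A sketch that parses the proposed `acceptor`/`requestor` syntax and rejects a second entry for the same participant pair (illustrative Python; a dummy `xmlns:m2n` declaration is added so a standard XML parser accepts the prefix, which real preCICE configs do not need):

```python
import xml.etree.ElementTree as ET

M2N_NS = "m2n"  # placeholder namespace URI bound to the m2n: prefix below

def m2n_pairs(xml_text: str) -> set:
    """Return the set of unordered participant pairs that have an m2n link."""
    root = ET.fromstring(xml_text)
    pairs = set()
    for node in root:
        if node.tag.startswith("{%s}" % M2N_NS):
            # frozenset makes (Fluid, Solid) equal to (Solid, Fluid),
            # so a "reverse" entry for bi-directional coupling is rejected.
            pair = frozenset((node.get("acceptor"), node.get("requestor")))
            if pair in pairs:
                raise ValueError("duplicate m2n for pair %s" % sorted(pair))
            pairs.add(pair)
    return pairs

config = """<coupling xmlns:m2n="m2n">
  <m2n:sockets acceptor="Fluid" requestor="Solid" exchange-directory="../" />
</coupling>"""
assert m2n_pairs(config) == {frozenset({"Fluid", "Solid"})}
```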
16,372 | 2,889,828,337 | IssuesEvent | 2015-06-13 20:03:06 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | contactsGet (from python) does not return correct phone number, email address, etc | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on May 31, 2015 11:26_
```
What steps will reproduce the problem?
Start ASE public server on the Android device.
From desktop computer, run this code:
import android
d = android.Android(addr=('192.168.3.24', '33068'))
res = d.contactsGet()
contacts = res[1]
print contacts[0]
What is the expected output? What do you see instead?
Expected output is:
{u'primary_email': u'2111', u'_id': u'2489', u'type': u'2', u'name': u'John Smith', u'primary_phone': u'6082666742'}
Output seen is:
{u'primary_email': u'john.smith@gmail.com', u'_id': u'2489', u'type': u'2', u'name': u'John Smith', u'primary_phone': u'1867'}
Basically, the primary_email, primary_phone and a bunch of other fields are
returning 4-digit numbers instead of actual values.
What version of the product are you using? On what operating system?
I am running ASE r25 and python_r7. I'm seeing this problem on an HTC Hero, running Android 1.5. Using
Please provide any additional information below.
If someone can point out where I should look for this bug, I can try fixing this myself. Thanks.
```
Original issue reported on code.google.com by `navin.ka...@gmail.com` on 23 Jun 2010 at 12:40
_Copied from original issue: damonkohler/android-scripting#357_ | 1.0 | contactsGet (from python) does not return correct phone number, email address, etc - _From @GoogleCodeExporter on May 31, 2015 11:26_
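Until the bug above is fixed, a client-side sanity check can flag affected records, since the misplaced values are short numeric row IDs rather than plausible emails or phone numbers. An illustrative Python sketch (the sample dicts mirror the ones quoted in the report; the heuristic thresholds are assumptions):

```python
def looks_swapped(contact: dict) -> bool:
    """Heuristic: flag contacts whose primary fields hold row-id-like values."""
    email = contact.get("primary_email", "")
    phone = contact.get("primary_phone", "")
    email_ok = "@" in email                          # real addresses have an @
    phone_ok = phone.isdigit() and len(phone) >= 7   # 4-digit IDs fail this
    return not (email_ok and phone_ok)

bad = {"primary_email": "john.smith@gmail.com", "_id": "2489", "type": "2",
       "name": "John Smith", "primary_phone": "1867"}
good = {"primary_email": "john.smith@gmail.com", "_id": "2489", "type": "2",
        "name": "John Smith", "primary_phone": "6082666742"}
assert looks_swapped(bad) and not looks_swapped(good)
```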
9,566 | 6,387,558,211 | IssuesEvent | 2017-08-03 13:55:15 | buildbot/buildbot | https://api.github.com/repos/buildbot/buildbot | closed | Builders view: Tag selected: "Back" button doesn't work as expected | easy usability web UI | Steps to reproduce:
1. open https://nine.buildbot.net/#/builders
2. click the "buildbot" tag (now the URL is https://nine.buildbot.net/#/builders?tags=%2Bbuildbot)
3. press the browser's "back" button
Actual result:
1. URL is changed back to https://nine.buildbot.net/#/builders
2. builders are still filtered by the "buildbot" tag
Expected result:
1. is ok
2. builders are not filtered by the tag anymore | True | Builders view: Tag selected: "Back" button doesn't work as expected - Steps to reproduce:
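The `%2B` in the URL above is just the encoded `+` prefix on the tag; re-deriving the filter from the URL on every history navigation (not only on click) keeps the view and the address bar in sync. The decode step, sketched in plain Python while the actual UI is AngularJS:

```python
from urllib.parse import urlsplit, parse_qs

def tags_from_url(url: str) -> list:
    """Extract the tag filter from an AngularJS hash-route URL."""
    fragment = urlsplit(url).fragment        # e.g. '/builders?tags=%2Bbuildbot'
    _, _, query = fragment.partition("?")    # the query lives inside the fragment
    return parse_qs(query).get("tags", [])

assert tags_from_url("https://nine.buildbot.net/#/builders?tags=%2Bbuildbot") == ["+buildbot"]
assert tags_from_url("https://nine.buildbot.net/#/builders") == []
```

An empty list here is exactly the "no filter" state the back button should restore.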
13,321 | 8,409,577,662 | IssuesEvent | 2018-10-12 07:51:16 | virtual-labs/vlsi-iiith | https://api.github.com/repos/virtual-labs/vlsi-iiith | closed | QA_VLSI_Spice Code Platform_Simulator_text-wrapped-in-small-screen | Category: Usability Developed By: VLEAD Open-Edx QA-bugs Severity: S1 Status : Closed category : UI | Defect Description
While testing in Windows machine found that the content in the small screen of simulator section is wrapped due to improper alignment. Due to this the user is unable to get the content of the experiment.
Environment :
OS: Windows 7, Linux
Browsers: Firefox,Chrome
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Attachments
Not available. | True | QA_VLSI_Spice Code Platform_Simulator_text-wrapped-in-small-screen - Defect Description
While testing in Windows machine found that the content in the small screen of simulator section is wrapped due to improper alignment. Due to this the user is unable to get the content of the experiment.
Environment :
OS: Windows 7, Linux
Browsers: Firefox,Chrome
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Attachments
Not available. | usab | qa vlsi spice code platform simulator text wrapped in small screen defect description while testing in windows machine found that the content in the small screen of simulator section is wrapped due to improper alignment due to this the user is unable to get the content of the experiment environment os windows linux browsers firefox chrome bandwidth hardware configuration processor attachments not available | 1 |
153,645 | 19,708,522,430 | IssuesEvent | 2022-01-13 01:37:52 | rvvergara/tv-series-app | https://api.github.com/repos/rvvergara/tv-series-app | opened | CVE-2021-32803 (High) detected in tar-4.4.1.tgz | security vulnerability | ## CVE-2021-32803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803>CVE-2021-32803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32803 (High) detected in tar-4.4.1.tgz - ## CVE-2021-32803 - High Severity Vulnerability
click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource | 0 |
14,925 | 9,594,138,543 | IssuesEvent | 2019-05-09 13:21:13 | virtualsatellite/VirtualSatellite4-Core | https://api.github.com/repos/virtualsatellite/VirtualSatellite4-Core | opened | Update QUDV Dialog icons | comfort/usability easy | There are just standard icons in the QUDV Dialogs. They should be replaced with something more adequate to QUDV.

| True | Update QUDV Dialog icons - There are just standard icons in the QUDV Dialogs. They should be replaced with something more adequate to QUDV.

| usab | update qudv dialog icons there are just standard icons in the qudv dialogs they should be replaced with something more adequate to qudv | 1 |
366,211 | 10,818,329,022 | IssuesEvent | 2019-11-08 11:47:51 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | mail.google.com - see bug description | browser-chrome priority-critical | <!-- @browser: Chrome 78.0.3904 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36 -->
<!-- @reported_with: -->
**URL**: https://mail.google.com/mail/u/0/#inbox/FMfcgxwDsFbCpVNgmHJpQjLpkdPRmdnF
**Browser / Version**: Chrome 78.0.3904
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: wsacadc
**Steps to Reproduce**:
sacaacsaasasxsas
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | mail.google.com - see bug description - <!-- @browser: Chrome 78.0.3904 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36 -->
<!-- @reported_with: -->
**URL**: https://mail.google.com/mail/u/0/#inbox/FMfcgxwDsFbCpVNgmHJpQjLpkdPRmdnF
**Browser / Version**: Chrome 78.0.3904
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: wsacadc
**Steps to Reproduce**:
sacaacsaasasxsas
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_usab | mail google com see bug description url browser version chrome operating system windows tested another browser yes problem type something else description wsacadc steps to reproduce sacaacsaasasxsas browser configuration none from with ❤️ | 0 |
16,011 | 10,481,530,133 | IssuesEvent | 2019-09-24 09:52:44 | City-of-Helsinki/decidim-helsinki | https://api.github.com/repos/City-of-Helsinki/decidim-helsinki | closed | Returning from the voting view to the process listing (usability) | usability issue | When you go via the top link, it nicely shows "Back to the listing" if you have already voted. But if you have not voted, or you go via the front page to the Osbudemo project and from there select e.g. the western major district and Vote, the way back to the project is missing.
For now I have always gone via the main page.
E.g. here there is no direct way back

Back to here

| True | Returning from the voting view to the process listing (usability) - When you go via the top link, it nicely shows "Back to the listing" if you have already voted. But if you have not voted, or you go via the front page to the Osbudemo project and from there select e.g. the western major district and Vote, the way back to the project is missing.
For now I have always gone via the main page.
E.g. here there is no direct way back

Back to here

| usab | returning from the voting view to the process listing usability when you go via the top link it nicely shows back to the listing if you have already voted but if you have not voted or you go via the front page to the osbudemo project and from there select e g the western major district and vote the way back to the project is missing for now i have always gone via the main page e g here there is no direct way back back to here | 1 |
165,376 | 6,275,546,558 | IssuesEvent | 2017-07-18 07:14:36 | kubernetes/kubeadm | https://api.github.com/repos/kubernetes/kubeadm | closed | Move all environment variables into the API | kind/cleanup kind/refactor priority/backlog | We shouldn't have this mix of "you can set this env param to modify behavior" (which is unversioned from the API machinery PoV) and the real API group with corresponding types using the correct API machinery.
Everything in [this file](https://github.com/kubernetes/kubernetes/blob/v1.7.0/cmd/kubeadm/app/apis/kubeadm/env.go) should be moved into the API types file. | 1.0 | Move all environment variables into the API - We shouldn't have this mix of "you can set this env param to modify behavior" (which is unversioned from the API machinery PoV) and the real API group with corresponding types using the correct API machinery.
Everything in [this file](https://github.com/kubernetes/kubernetes/blob/v1.7.0/cmd/kubeadm/app/apis/kubeadm/env.go) should be moved into the API types file. | non_usab | move all environment variables into the api we shouldn t have this mix of you can set this env param to modify behavior which is unversioned from the api machinery pov and the real api group with corresponding types using the correct api machinery everything in should be moved into the api types file | 0 |
128,727 | 18,070,106,325 | IssuesEvent | 2021-09-21 01:11:56 | brogers588/Java_Demo | https://api.github.com/repos/brogers588/Java_Demo | opened | CVE-2021-3805 (Medium) detected in object-path-0.11.4.tgz | security vulnerability | ## CVE-2021-3805 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.4.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz</a></p>
<p>Path to dependency file: Java_Demo/client/package.json</p>
<p>Path to vulnerable library: Java_Demo/client/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- resolve-url-loader-3.1.0.tgz
- adjust-sourcemap-loader-2.0.0.tgz
- :x: **object-path-0.11.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805>CVE-2021-3805</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/">https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: object-path - 0.11.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"object-path","packageVersion":"0.11.4","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;resolve-url-loader:3.1.0;adjust-sourcemap-loader:2.0.0;object-path:0.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"object-path - 0.11.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3805","vulnerabilityDetails":"object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes (\u0027Prototype Pollution\u0027)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-3805 (Medium) detected in object-path-0.11.4.tgz - ## CVE-2021-3805 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.4.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz</a></p>
<p>Path to dependency file: Java_Demo/client/package.json</p>
<p>Path to vulnerable library: Java_Demo/client/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- resolve-url-loader-3.1.0.tgz
- adjust-sourcemap-loader-2.0.0.tgz
- :x: **object-path-0.11.4.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805>CVE-2021-3805</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/">https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: object-path - 0.11.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"object-path","packageVersion":"0.11.4","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;resolve-url-loader:3.1.0;adjust-sourcemap-loader:2.0.0;object-path:0.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"object-path - 0.11.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3805","vulnerabilityDetails":"object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes (\u0027Prototype Pollution\u0027)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | non_usab | cve medium detected in object path tgz cve medium severity vulnerability vulnerable library object path tgz access deep object properties using a path library home page a href path to dependency file java demo client package json path to vulnerable library java demo client node modules object path package json dependency hierarchy react scripts tgz root library resolve url loader tgz adjust sourcemap loader tgz x object path tgz vulnerable library found in base branch master vulnerability details object path is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution object path 
isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts resolve url loader adjust sourcemap loader object path isminimumfixversionavailable true minimumfixversion object path basebranches vulnerabilityidentifier cve vulnerabilitydetails object path is vulnerable to improperly controlled modification of object prototype attributes pollution vulnerabilityurl | 0 |
211,560 | 23,833,152,582 | IssuesEvent | 2022-09-06 01:08:37 | RG4421/spark-tpcds-benchmark | https://api.github.com/repos/RG4421/spark-tpcds-benchmark | opened | CVE-2022-38752 (Medium) detected in snakeyaml-1.26.jar | security vulnerability | ## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.26.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /spark-tpcds-benchmark-runner/build.gradle</p>
<p>Path to vulnerable library: /20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193449/snakeyaml-1.26.jar,/tmp/ws-ua_20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193449/snakeyaml-1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/spark-tpcds-benchmark/commit/24795721e1aed3432c75aa8da2526a6878146e28">24795721e1aed3432c75aa8da2526a6878146e28</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| True | CVE-2022-38752 (Medium) detected in snakeyaml-1.26.jar - ## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.26.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /spark-tpcds-benchmark-runner/build.gradle</p>
<p>Path to vulnerable library: /20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193449/snakeyaml-1.26.jar,/tmp/ws-ua_20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193449/snakeyaml-1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/spark-tpcds-benchmark/commit/24795721e1aed3432c75aa8da2526a6878146e28">24795721e1aed3432c75aa8da2526a6878146e28</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| non_usab | cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file spark tpcds benchmark runner build gradle path to vulnerable library rptmif downloadresource eevkwp snakeyaml jar tmp ws ua rptmif downloadresource eevkwp snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library found in head commit a href found in base branch develop vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href | 0 |
21,836 | 17,859,366,781 | IssuesEvent | 2021-09-05 17:11:51 | humhub/humhub | https://api.github.com/repos/humhub/humhub | closed | UserPickerField - Instant display of users | Kind:Enhancement Component:User Topic:Usability | ### Is your feature request related to a problem? Please describe.
When typing the @ sign in a post or comment, it would be nice to directly see a list of possible and relevant users.
### Describe the solution you'd like
For example, in a post, this could be a list of the last 10 members active in the Space/ContentContainer.
For a comment, this could be e.g. the followers of the content. (The creator, other commenters, Liking Users).
| True | UserPickerField - Instant display of users - ### Is your feature request related to a problem? Please describe.
When typing the @ sign in a post or comment, it would be nice to directly see a list of possible and relevant users.
### Describe the solution you'd like
For example, in a post, this could be a list of the last 10 members active in the Space/ContentContainer.
For a comment, this could be e.g. the followers of the content. (The creator, other commenters, Liking Users).
| usab | userpickerfield instant display of users is your feature request related to a problem please describe when typing the sign in a post or comment it would be nice to directly see a list of possible and relevant users describe the solution you d like for example in a post this could be a list of the last members active in the space contentcontainer for a comment this could be e g the followers of the content the creator other commenters liking users | 1 |
133,347 | 12,537,227,350 | IssuesEvent | 2020-06-05 02:43:42 | sfu-db/covid19-datasets | https://api.github.com/repos/sfu-db/covid19-datasets | closed | Add : Johns Hopkins COVID-19 dataset | documentation | The dataset can be found here:
https://github.com/CSSEGISandData/COVID-19
Please perform the following tasks for this dataset:
1) Create a dataset-details page and fill in all the relevant info.
2) Add link to dataset-details page to the README.md.
3) Generate pandas-profile webpage and add the link to the webpage as described in [how to contribute](https://github.com/sfu-db/covid19-datasets/blob/master/assets/how-to-contribute-COVID19repo.md) page.
4) add your .py script that was used to generate the pandas-profile report to `scripts` directory.
**PMP RAs** - Please feel free to assign this to yourself by commenting on it and I will assign this to you.
| 1.0 | Add : Johns Hopkins COVID-19 dataset - The dataset can be found here:
https://github.com/CSSEGISandData/COVID-19
Please perform the following tasks for this dataset:
1) Create a dataset-details page and fill in all the relevant info.
2) Add link to dataset-details page to the README.md.
3) Generate pandas-profile webpage and add the link to the webpage as described in [how to contribute](https://github.com/sfu-db/covid19-datasets/blob/master/assets/how-to-contribute-COVID19repo.md) page.
4) add your .py script that was used to generate the pandas-profile report to `scripts` directory.
**PMP RAs** - Please feel free to assign this to yourself by commenting on it and I will assign this to you.
| non_usab | add johns hopkins covid dataset the dataset can be found here please perform the following tasks for this dataset create a dataset details page and fill in all the relevant info add link to dataset details page to the readme md generate pandas profile webpage and add the link to the webpage as described in page add your py script that was used to generate the pandas profile report to scripts directory pmp ras please feel free to assign this to yourself by commenting on it and i will assign this to you | 0 |
201,236 | 7,027,789,328 | IssuesEvent | 2017-12-25 02:48:09 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | hp.myway.com - site is not usable | browser-firefox priority-important type-stylo | <!-- @browser: Firefox 59.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:59.0) Gecko/20100101 Firefox/59.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @extra_label: type-stylo -->
**URL**: http://hp.myway.com/radiorage/ttab02/index.html?p2=%5EZX%5Expt539%5ETTAB02%5Esa&n=783ad866&ptb=A7A630E7-0D62-4B43-B3B1-8A26656708F0&si=674340&st=tab
**Browser / Version**: Firefox 59.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: It does not work
**Steps to Reproduce**:
layout.css.servo.enabled: true
[](https://webcompat.com/uploads/2017/12/5157545c-a743-487b-910a-1a4621b62a1e.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | hp.myway.com - site is not usable - <!-- @browser: Firefox 59.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:59.0) Gecko/20100101 Firefox/59.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @extra_label: type-stylo -->
**URL**: http://hp.myway.com/radiorage/ttab02/index.html?p2=%5EZX%5Expt539%5ETTAB02%5Esa&n=783ad866&ptb=A7A630E7-0D62-4B43-B3B1-8A26656708F0&si=674340&st=tab
**Browser / Version**: Firefox 59.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: It does not work
**Steps to Reproduce**:
layout.css.servo.enabled: true
[](https://webcompat.com/uploads/2017/12/5157545c-a743-487b-910a-1a4621b62a1e.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_usab | hp myway com site is not usable url browser version firefox operating system windows tested another browser unknown problem type site is not usable description it does not work steps to reproduce layout css servo enabled true from with ❤️ | 0 |
21,612 | 17,374,551,463 | IssuesEvent | 2021-07-30 18:47:33 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Changing editor theme is extremely slow | bug confirmed regression topic:editor usability | ### Godot version
4.0.dev (bdcc8741e4826ce27850bbffa5880c57451e3be5)
### System information
Windows 10
### Issue description
When applying a theme from editor settings, the editor freezes during a very long time before applying the theme (40 seconds on my configuration, in release_debug).
### Steps to reproduce
1. Open any project
2. Open `Editor > Editor Settings`
3. Go to `Interface > Theme`
4. Select a different theme in the `Preset` setting
5. Press close
6. Observe the editor freeze for a long time
### Minimal reproduction project
n/a | True | Changing editor theme is extremely slow - ### Godot version
4.0.dev (bdcc8741e4826ce27850bbffa5880c57451e3be5)
### System information
Windows 10
### Issue description
When applying a theme from editor settings, the editor freezes during a very long time before applying the theme (40 seconds on my configuration, in release_debug).
### Steps to reproduce
1. Open any project
2. Open `Editor > Editor Settings`
3. Go to `Interface > Theme`
4. Select a different theme in the `Preset` setting
5. Press close
6. Observe the editor freeze for a long time
### Minimal reproduction project
n/a | usab | changing editor theme is extremely slow godot version dev system information windows issue description when applying a theme from editor settings the editor freezes during a very long time before applying the theme seconds on my configuration in release debug steps to reproduce open any project open editor editor settings go to interface theme select a different theme in the preset setting press close observe the editor freeze for a long time minimal reproduction project n a | 1 |
24,501 | 23,851,663,117 | IssuesEvent | 2022-09-06 18:32:40 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Add `pop` method to reflection `List` trait | D-Good-First-Issue C-Usability A-Reflection | Yeah that's what I was getting at in my other comment. I don't think it's going to be beneficial to add a new trait for each data structure type. That can probably be served somehow via `TypeData`.
But a `pop` method at least let's us handle lists, queues, and stacks in a general way.
_Originally posted by @MrGVSV in https://github.com/bevyengine/bevy/pull/5792#discussion_r955195316_ | True | Add `pop` method to reflection `List` trait - Yeah that's what I was getting at in my other comment. I don't think it's going to be beneficial to add a new trait for each data structure type. That can probably be served somehow via `TypeData`.
But a `pop` method at least let's us handle lists, queues, and stacks in a general way.
_Originally posted by @MrGVSV in https://github.com/bevyengine/bevy/pull/5792#discussion_r955195316_ | usab | add pop method to reflection list trait yeah that s what i was getting at in my other comment i don t think it s going to be beneficial to add a new trait for each data structure type that can probably be served somehow via typedata but a pop method at least let s us handle lists queues and stacks in a general way originally posted by mrgvsv in | 1 |
28,178 | 31,998,864,402 | IssuesEvent | 2023-09-21 10:56:54 | VirtusLab/besom | https://api.github.com/repos/VirtusLab/besom | opened | `ResourceId` opaque type cannot be used in places that require `String` easily | kind/improvement impact/usability size/S area/api | ```
[error] Found: besom.internal.Output[besom.types.ResourceId]
[error] Required: besom.internal.Input[String | String]
[error] restApi = api.id,
[error] ^^^^^^
```
Instead of this:
```
restApi = api.id.map(_.asString), // FIXME this is a hack
```
It would be nice to be able to use this:
```
restApi = api.id,
```
or, where applicable, just this:
```
restApi = api,
``` | True | `ResourceId` opaque type cannot be used in places that require `String` easily - ```
[error] Found: besom.internal.Output[besom.types.ResourceId]
[error] Required: besom.internal.Input[String | String]
[error] restApi = api.id,
[error] ^^^^^^
```
Instead of this:
```
restApi = api.id.map(_.asString), // FIXME this is a hack
```
It would be nice to be able to use this:
```
restApi = api.id,
```
or, where applicable, just this:
```
restApi = api,
``` | usab | resourceid opaque type cannot be used in places that require string easily found besom internal output required besom internal input restapi api id instead of this restapi api id map asstring fixme this is a hack it would be nice to be able to use this restapi api id or where applicable just this restapi api | 1 |
18,256 | 12,705,479,126 | IssuesEvent | 2020-06-23 04:46:29 | dgraph-io/dgraph | https://api.github.com/repos/dgraph-io/dgraph | opened | Dgraph Zero Reports Errors when Alpha Connects | area/usability kind/enhancement status/accepted | <!-- If you suspect this could be a bug, follow the template. -->
### What version of Dgraph are you using?
```
v20.07.0-beta.Jun15
```
### Have you tried reproducing the issue with the latest release?
Yep.
### What is the hardware spec (RAM, OS)?
* 8 vCPUs, 30 GB memory, 100 GB HD
* Ubuntu Bionic Beaver
### Steps to reproduce the issue (command/config used to run Dgraph).
```bash
dgraph zero &
dgraph alpha --tls_dir ./tls -p p1 -w w1 &
dgraph alpha --tls_dir ./tls -p p2 -w w2 -o1 &
```
### Expected behaviour and actual result.
The expected is that errors would report errors, and non-errors would not report errors.
The actual results are the following when starting up 1st alpha and 2nd alpha.
```
I0622 19:41:27.283878 23691 zero.go:422] Got connection request: cluster_info_only:true
I0622 19:41:27.284102 23691 zero.go:440] Connected: cluster_info_only:true
I0622 19:41:27.285139 23691 zero.go:422] Got connection request: addr:"localhost:7080"
I0622 19:41:27.285354 23691 pool.go:160] CONNECTING to localhost:7080
W0622 19:41:27.286628 23691 pool.go:254] Connection lost with localhost:7080. Error: rpc error: code = Unknown desc = No node has been set up yet
I0622 19:41:27.288309 23691 zero.go:571] Connected: id:1 group_id:1 addr:"localhost:7080"
I0622 19:43:44.026371 23691 zero.go:422] Got connection request: cluster_info_only:true
I0622 19:43:44.026598 23691 zero.go:440] Connected: cluster_info_only:true
I0622 19:43:44.027507 23691 zero.go:422] Got connection request: addr:"localhost:7081"
I0622 19:43:44.027810 23691 pool.go:160] CONNECTING to localhost:7081
W0622 19:43:44.028838 23691 pool.go:254] Connection lost with localhost:7081. Error: rpc error: code = Unknown desc = No node has been set up yet
I0622 19:43:44.030501 23691 zero.go:571] Connected: id:2 group_id:2 addr:"localhost:7081"
```
| True | Dgraph Zero Reports Errors when Alpha Connects - <!-- If you suspect this could be a bug, follow the template. -->
### What version of Dgraph are you using?
```
v20.07.0-beta.Jun15
```
### Have you tried reproducing the issue with the latest release?
Yep.
### What is the hardware spec (RAM, OS)?
* 8 vCPUs, 30 GB memory, 100 GB HD
* Ubuntu Bionic Beaver
### Steps to reproduce the issue (command/config used to run Dgraph).
```bash
dgraph zero &
dgraph alpha --tls_dir ./tls -p p1 -w w1 &
dgraph alpha --tls_dir ./tls -p p2 -w w2 -o1 &
```
### Expected behaviour and actual result.
The expected is that errors would report errors, and non-errors would not report errors.
The actual results are the following when starting up 1st alpha and 2nd alpha.
```
I0622 19:41:27.283878 23691 zero.go:422] Got connection request: cluster_info_only:true
I0622 19:41:27.284102 23691 zero.go:440] Connected: cluster_info_only:true
I0622 19:41:27.285139 23691 zero.go:422] Got connection request: addr:"localhost:7080"
I0622 19:41:27.285354 23691 pool.go:160] CONNECTING to localhost:7080
W0622 19:41:27.286628 23691 pool.go:254] Connection lost with localhost:7080. Error: rpc error: code = Unknown desc = No node has been set up yet
I0622 19:41:27.288309 23691 zero.go:571] Connected: id:1 group_id:1 addr:"localhost:7080"
I0622 19:43:44.026371 23691 zero.go:422] Got connection request: cluster_info_only:true
I0622 19:43:44.026598 23691 zero.go:440] Connected: cluster_info_only:true
I0622 19:43:44.027507 23691 zero.go:422] Got connection request: addr:"localhost:7081"
I0622 19:43:44.027810 23691 pool.go:160] CONNECTING to localhost:7081
W0622 19:43:44.028838 23691 pool.go:254] Connection lost with localhost:7081. Error: rpc error: code = Unknown desc = No node has been set up yet
I0622 19:43:44.030501 23691 zero.go:571] Connected: id:2 group_id:2 addr:"localhost:7081"
```
| usab | dgraph zero reports errors when alpha connects what version of dgraph are you using beta have you tried reproducing the issue with the latest release yep what is the hardware spec ram os vcpus gb memory gb hd ubuntu bionic beaver steps to reproduce the issue command config used to run dgraph bash dgraph zero dgraph alpha tls dir tls p w dgraph alpha tls dir tls p w expected behaviour and actual result the expected is that errors would report errors and non errors would not report errors the actual results are the following when starting up alpha and alpha zero go got connection request cluster info only true zero go connected cluster info only true zero go got connection request addr localhost pool go connecting to localhost pool go connection lost with localhost error rpc error code unknown desc no node has been set up yet zero go connected id group id addr localhost zero go got connection request cluster info only true zero go connected cluster info only true zero go got connection request addr localhost pool go connecting to localhost pool go connection lost with localhost error rpc error code unknown desc no node has been set up yet zero go connected id group id addr localhost | 1 |
21,006 | 16,439,338,737 | IssuesEvent | 2021-05-20 12:49:38 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | PhysicalSkyMaterial is black (but ProceduralSky displays normally) | confirmed discussion topic:rendering usability | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
Current Master: `a17fba3f21241beb68127d30b8d07db267e65e35`
**OS/device including version:**
<!-- Specify GPU model, drivers, and the backend (GLES2, GLES3, Vulkan) if graphics-related. -->
Windows 10, Vulkan, iGPU intel 10th gen ( intel hd graphics ):
`i5-1035G1 CPU @ 1.00GHz, 1.19GHz`
**Issue description:**
<!-- What happened, and what was expected. -->
The sky is black from the default PhysicalSkyMaterial, I was expecting the sky to look like a sky.

**Steps to reproduce:**
Create a WorldEnvironment node, add a sky as the background, then add a PhysicalSkyMaterial.
**Minimal reproduction project:**
<!-- A small Godot project which reproduces the issue. Drag and drop a zip archive to upload it. -->
[no-sky.zip](https://github.com/godotengine/godot/files/5082455/no-sky.zip)
| True | PhysicalSkyMaterial is black (but ProceduralSky displays normally) - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
Current Master: `a17fba3f21241beb68127d30b8d07db267e65e35`
**OS/device including version:**
<!-- Specify GPU model, drivers, and the backend (GLES2, GLES3, Vulkan) if graphics-related. -->
Windows 10, Vulkan, iGPU intel 10th gen ( intel hd graphics ):
`i5-1035G1 CPU @ 1.00GHz, 1.19GHz`
**Issue description:**
<!-- What happened, and what was expected. -->
The sky is black from the default PhysicalSkyMaterial, I was expecting the sky to look like a sky.

**Steps to reproduce:**
Create a WorldEnvironment node, add a sky as the background, then add a PhysicalSkyMaterial.
**Minimal reproduction project:**
<!-- A small Godot project which reproduces the issue. Drag and drop a zip archive to upload it. -->
[no-sky.zip](https://github.com/godotengine/godot/files/5082455/no-sky.zip)
| usab | physicalskymaterial is black but proceduralsky displays normally please search existing issues for potential duplicates before filing yours godot version current master os device including version windows vulkan igpu intel gen intel hd graphics cpu issue description the sky is black from the default physicalskymaterial i was expecting the sky to look like a sky steps to reproduce create a worldenvironment node add a sky as the background then add a physicalskymaterial minimal reproduction project | 1 |
25,807 | 25,939,282,100 | IssuesEvent | 2022-12-16 16:46:52 | pulumi/pulumi-cloudflare | https://api.github.com/repos/pulumi/pulumi-cloudflare | closed | `v4.13.0` no longer accepts API token from env var? | kind/bug impact/usability | ### What happened?
Since `4.13.0`, it seems like this provided no longer respects `CLOUDFLARE_API_TOKEN` env var, and throws the following error when trying to `up`:
```
error: could not validate provider configuration: 3 errors occurred:
* Invalid combination of arguments: "api_key": one of `api_key,api_token,api_user_service_key` must be specified
* Invalid combination of arguments: "api_token": one of `api_key,api_token,api_user_service_key` must be specified
* Invalid combination of arguments: "api_user_service_key": one of `api_key,api_token,api_user_service_key` must be specified
```
Downgrading to `4.12.1` seems to resolve this issue.
### Steps to reproduce
1. Create a project and a stack, without configuring CloudFlare provider through Pulumi config.
2. Set `CLOUDFLARE_API_TOKEN` as env var and run pulumi up.
### Expected Behavior
Should respect `CLOUDFLARE_API_TOKEN` and use it, when available.
### Actual Behavior
Error regarding a missing token.
### Output of `pulumi about`
```
CLI
Version 3.48.0
Go Version go1.19.3
Go Compiler gc
Plugins
NAME VERSION
cloudflare 4.12.1
nodejs unknown
Host
OS darwin
Version 12.6
Arch arm64
```
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| True | `v4.13.0` no longer accepts API token from env var? - ### What happened?
Since `4.13.0`, it seems like this provided no longer respects `CLOUDFLARE_API_TOKEN` env var, and throws the following error when trying to `up`:
```
error: could not validate provider configuration: 3 errors occurred:
* Invalid combination of arguments: "api_key": one of `api_key,api_token,api_user_service_key` must be specified
* Invalid combination of arguments: "api_token": one of `api_key,api_token,api_user_service_key` must be specified
* Invalid combination of arguments: "api_user_service_key": one of `api_key,api_token,api_user_service_key` must be specified
```
Downgrading to `4.12.1` seems to resolve this issue.
### Steps to reproduce
1. Create a project and a stack, without configuring CloudFlare provider through Pulumi config.
2. Set `CLOUDFLARE_API_TOKEN` as env var and run pulumi up.
### Expected Behavior
Should respect `CLOUDFLARE_API_TOKEN` and use it, when available.
### Actual Behavior
Error regarding a missing token.
### Output of `pulumi about`
```
CLI
Version 3.48.0
Go Version go1.19.3
Go Compiler gc
Plugins
NAME VERSION
cloudflare 4.12.1
nodejs unknown
Host
OS darwin
Version 12.6
Arch arm64
```
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| usab | no longer accepts api token from env var what happened since it seems like this provided no longer respects cloudflare api token env var and throws the following error when trying to up error could not validate provider configuration errors occurred invalid combination of arguments api key one of api key api token api user service key must be specified invalid combination of arguments api token one of api key api token api user service key must be specified invalid combination of arguments api user service key one of api key api token api user service key must be specified downgrading to seems to resolve this issue steps to reproduce create a project and a stack without configuring cloudflare provider through pulumi config set cloudflare api token as env var and run pulumi up expected behavior should respect cloudflare api token and use it when available actual behavior error regarding a missing token output of pulumi about cli version go version go compiler gc plugins name version cloudflare nodejs unknown host os darwin version arch additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already | 1 |
60,420 | 17,023,421,249 | IssuesEvent | 2021-07-03 01:56:42 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | xapi not returning any data | Component: xapi Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 5.04pm, Monday, 8th June 2009]**
xapi is timing out and/or returning no data. For version 0.5 and 0.6. | 1.0 | xapi not returning any data - **[Submitted to the original trac issue database at 5.04pm, Monday, 8th June 2009]**
xapi is timing out and/or returning no data. For version 0.5 and 0.6. | non_usab | xapi not returning any data xapi is timing out and or returning no data for version and | 0 |
101,722 | 4,129,085,358 | IssuesEvent | 2016-06-10 09:35:04 | flashxyz/BookMe | https://api.github.com/repos/flashxyz/BookMe | closed | BUG: reservation limit not working well | Backlog bug priority : medium | reservation limit not working well when changing the slot time to time different from 60 minutes | 1.0 | BUG: reservation limit not working well - reservation limit not working well when changing the slot time to time different from 60 minutes | non_usab | bug reservation limit not working well reservation limit not working well when changing the slot time to time different from minutes | 0 |
99,561 | 30,494,083,424 | IssuesEvent | 2023-07-18 09:37:49 | aws/aws-cdk | https://api.github.com/repos/aws/aws-cdk | closed | `CodeBuild`: Can't run docker image on ARM build instance | bug p2 feature-request @aws-cdk/aws-codebuild | ### Describe the bug
Hey there!
It seems like, so far, the `LinuxArmBuildImage` class doesn't support the `from_docker_registry` function [1]. But the `LinuxBuildImage` class does. This seems weird to me since CodeBuild is generally supporting the use of ARM images [2].
Not sure if this is a bug or a future request, feel free to change.
[1] https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_codebuild/README.html#images and https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_codebuild/LinuxArmBuildImage.html
[2] https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html###
### Reproduction Steps
```python
my_codebuild_project = codebuild.Project(
self,
"MyCodeBuildProject",
project_name="MyCodeBuildProject",
build_spec=codebuild.BuildSpec.from_object(
{
"version": "0.2",
"phases": {
"build": {
"commands": [
"echo hi",
]
}
},
}
),
environment=codebuild.BuildEnvironment(
build_image=codebuild.LinuxBuildImage.from_docker_registry( # can't use LinuxArmBuildImage
f"public.ecr.aws/sam/build-python3.9:latest-arm64" # runs into error with the -arm64 ending
),
compute_type=codebuild.ComputeType.LARGE,
privileged=True,
),
)
```
### Possible Solution
`LinuxArmBuildImage` should also support `from_docker_registry` function
### Additional Information/Context
_No response_
### CDK CLI Version
2.66.1
### Framework Version
_No response_
### Node.js Version
v16.16.0
### OS
MacOS
### Language
Python
### Language Version
3.9.15
### Other information
_No response_ | 1.0 | `CodeBuild`: Can't run docker image on ARM build instance - ### Describe the bug
Hey there!
It seems like, so far, the `LinuxArmBuildImage` class doesn't support the `from_docker_registry` function [1]. But the `LinuxBuildImage` class does. This seems weird to me since CodeBuild is generally supporting the use of ARM images [2].
Not sure if this is a bug or a future request, feel free to change.
[1] https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_codebuild/README.html#images and https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_codebuild/LinuxArmBuildImage.html
[2] https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html###
### Reproduction Steps
```python
my_codebuild_project = codebuild.Project(
self,
"MyCodeBuildProject",
project_name="MyCodeBuildProject",
build_spec=codebuild.BuildSpec.from_object(
{
"version": "0.2",
"phases": {
"build": {
"commands": [
"echo hi",
]
}
},
}
),
environment=codebuild.BuildEnvironment(
build_image=codebuild.LinuxBuildImage.from_docker_registry( # can't use LinuxArmBuildImage
f"public.ecr.aws/sam/build-python3.9:latest-arm64" # runs into error with the -arm64 ending
),
compute_type=codebuild.ComputeType.LARGE,
privileged=True,
),
)
```
### Possible Solution
`LinuxArmBuildImage` should also support `from_docker_registry` function
### Additional Information/Context
_No response_
### CDK CLI Version
2.66.1
### Framework Version
_No response_
### Node.js Version
v16.16.0
### OS
MacOS
### Language
Python
### Language Version
3.9.15
### Other information
_No response_ | non_usab | codebuild can t run docker image on arm build instance describe the bug hey there it seems like so far the linuxarmbuildimage class doesn t support the from docker registry function but the linuxbuildimage class does this seems weird to me since codebuild is generally supporting the use of arm images not sure if this is a bug or a future request feel free to change and reproduction steps python my codebuild project codebuild project self mycodebuildproject project name mycodebuildproject build spec codebuild buildspec from object version phases build commands echo hi environment codebuild buildenvironment build image codebuild linuxbuildimage from docker registry can t use linuxarmbuildimage f public ecr aws sam build latest runs into error with the ending compute type codebuild computetype large privileged true possible solution linuxarmbuildimage should also support from docker registry function additional information context no response cdk cli version framework version no response node js version os macos language python language version other information no response | 0 |
27,371 | 28,180,143,614 | IssuesEvent | 2023-04-04 01:19:57 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Reduce render Node boilerplate | D-Good-First-Issue A-Rendering C-Usability | 1. Setting up a render node in Plugin::build() often looks like this:
```rust
// Create node
let taa_node = TAANode::new(&mut render_app.world);
// Get core_3d subgraph
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let draw_3d_graph = graph
.get_sub_graph_mut(crate::core_3d::graph::NAME)
.unwrap();
// Add node, connect to view
draw_3d_graph.add_node(draw_3d_graph::node::TAA, taa_node);
draw_3d_graph.add_slot_edge(
draw_3d_graph.input_node().id,
crate::core_3d::graph::input::VIEW_ENTITY,
draw_3d_graph::node::TAA,
TAANode::IN_VIEW,
);
// Order nodes: MAIN_PASS -> TAA -> BLOOM -> TONEMAPPING
draw_3d_graph.add_node_edge(
crate::core_3d::graph::node::MAIN_PASS,
draw_3d_graph::node::TAA,
);
draw_3d_graph.add_node_edge(draw_3d_graph::node::TAA, crate::core_3d::graph::node::BLOOM);
draw_3d_graph.add_node_edge(
draw_3d_graph::node::TAA,
crate::core_3d::graph::node::TONEMAPPING,
);
```
We could add a helper to simplify it to something like this:
```rust
render_app.add_view_node<TAANode>(
subgraph: crate::core_3d::graph::NAME,
name: draw_3d_graph::node::TAA,
order: &[
crate::core_3d::graph::node::MAIN_PASS,
draw_3d_graph::node::TAA,
crate::core_3d::graph::node::BLOOM,
crate::core_3d::graph::node::TONEMAPPING,
],
);
```
---
2. Declaring a render Node often looks like this:
```rust
struct TAANode {
view_query: QueryState<(
&'static ExtractedCamera,
&'static ViewTarget,
&'static TAAHistoryTextures,
&'static ViewPrepassTextures,
&'static TAAPipelineId,
)>,
}
impl TAANode {
const IN_VIEW: &'static str = "view";
fn new(world: &mut World) -> Self {
Self {
view_query: QueryState::new(world),
}
}
}
impl Node for TAANode {
fn input(&self) -> Vec<SlotInfo> {
vec![SlotInfo::new(Self::IN_VIEW, SlotType::Entity)]
}
fn update(&mut self, world: &mut World) {
self.view_query.update_archetypes(world);
}
fn run(
&self,
graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
// Trace span
#[cfg(feature = "trace")]
let _taa_span = info_span!("taa").entered();
// Get view_query
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
let (
Ok((camera, view_target, taa_history_textures, prepass_textures, taa_pipeline_id)),
Some(pipelines),
Some(pipeline_cache),
) = (
self.view_query.get_manual(world, view_entity),
world.get_resource::<TAAPipeline>(),
world.get_resource::<PipelineCache>(),
) else {
return Ok(());
};
// ...
}
}
```
We could provide an easier and more ergonomic Node impl:
```rust
struct TAANode {
view_query: QueryState<(
&'static ExtractedCamera,
&'static ViewTarget,
&'static TAAHistoryTextures,
&'static ViewPrepassTextures,
&'static TAAPipelineId,
)>,
// If possible, allow resource access declaratively
taa_pipeline: Res<TAAPipeline>,
pipeline_cache: Res<PipelineCache>,
}
impl SimpleNode for TAANode {
fn run(
&self,
graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
// Automatically get view_query
((camera, view_target, taa_history_textures, ...)): (...),
) -> Result<(), NodeRunError> {
// Trace span automatically added
// ...
}
}
``` | True | Reduce render Node boilerplate - 1. Setting up a render node in Plugin::build() often looks like this:
```rust
// Create node
let taa_node = TAANode::new(&mut render_app.world);
// Get core_3d subgraph
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let draw_3d_graph = graph
.get_sub_graph_mut(crate::core_3d::graph::NAME)
.unwrap();
// Add node, connect to view
draw_3d_graph.add_node(draw_3d_graph::node::TAA, taa_node);
draw_3d_graph.add_slot_edge(
draw_3d_graph.input_node().id,
crate::core_3d::graph::input::VIEW_ENTITY,
draw_3d_graph::node::TAA,
TAANode::IN_VIEW,
);
// Order nodes: MAIN_PASS -> TAA -> BLOOM -> TONEMAPPING
draw_3d_graph.add_node_edge(
crate::core_3d::graph::node::MAIN_PASS,
draw_3d_graph::node::TAA,
);
draw_3d_graph.add_node_edge(draw_3d_graph::node::TAA, crate::core_3d::graph::node::BLOOM);
draw_3d_graph.add_node_edge(
draw_3d_graph::node::TAA,
crate::core_3d::graph::node::TONEMAPPING,
);
```
We could add a helper to simplify it to something like this:
```rust
render_app.add_view_node<TAANode>(
subgraph: crate::core_3d::graph::NAME,
name: draw_3d_graph::node::TAA,
order: &[
crate::core_3d::graph::node::MAIN_PASS,
draw_3d_graph::node::TAA,
crate::core_3d::graph::node::BLOOM,
crate::core_3d::graph::node::TONEMAPPING,
],
);
```
---
2. Declaring a render Node often looks like this:
```rust
struct TAANode {
view_query: QueryState<(
&'static ExtractedCamera,
&'static ViewTarget,
&'static TAAHistoryTextures,
&'static ViewPrepassTextures,
&'static TAAPipelineId,
)>,
}
impl TAANode {
const IN_VIEW: &'static str = "view";
fn new(world: &mut World) -> Self {
Self {
view_query: QueryState::new(world),
}
}
}
impl Node for TAANode {
fn input(&self) -> Vec<SlotInfo> {
vec![SlotInfo::new(Self::IN_VIEW, SlotType::Entity)]
}
fn update(&mut self, world: &mut World) {
self.view_query.update_archetypes(world);
}
fn run(
&self,
graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
// Trace span
#[cfg(feature = "trace")]
let _taa_span = info_span!("taa").entered();
// Get view_query
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
let (
Ok((camera, view_target, taa_history_textures, prepass_textures, taa_pipeline_id)),
Some(pipelines),
Some(pipeline_cache),
) = (
self.view_query.get_manual(world, view_entity),
world.get_resource::<TAAPipeline>(),
world.get_resource::<PipelineCache>(),
) else {
return Ok(());
};
// ...
}
}
```
We could provide an easier and more ergonomic Node impl:
```rust
struct TAANode {
view_query: QueryState<(
&'static ExtractedCamera,
&'static ViewTarget,
&'static TAAHistoryTextures,
&'static ViewPrepassTextures,
&'static TAAPipelineId,
)>,
// If possible, allow resource access declaratively
taa_pipeline: Res<TAAPipeline>,
pipeline_cache: Res<PipelineCache>,
}
impl SimpleNode for TAANode {
fn run(
&self,
graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
// Automatically get view_query
((camera, view_target, taa_history_textures, ...)): (...),
) -> Result<(), NodeRunError> {
// Trace span automatically added
// ...
}
}
``` | usab | reduce render node boilerplate setting up a render node in plugin build often looks like this rust create node let taa node taanode new mut render app world get core subgraph let mut graph render app world resource mut let draw graph graph get sub graph mut crate core graph name unwrap add node connect to view draw graph add node draw graph node taa taa node draw graph add slot edge draw graph input node id crate core graph input view entity draw graph node taa taanode in view order nodes main pass taa bloom tonemapping draw graph add node edge crate core graph node main pass draw graph node taa draw graph add node edge draw graph node taa crate core graph node bloom draw graph add node edge draw graph node taa crate core graph node tonemapping we could add a helper to simplify it to something like this rust render app add view node subgraph crate core graph name name draw graph node taa order crate core graph node main pass draw graph node taa crate core graph node bloom crate core graph node tonemapping declaring a render node often looks like this rust struct taanode view query querystate static extractedcamera static viewtarget static taahistorytextures static viewprepasstextures static taapipelineid impl taanode const in view static str view fn new world mut world self self view query querystate new world impl node for taanode fn input self vec vec fn update mut self world mut world self view query update archetypes world fn run self graph mut rendergraphcontext render context mut rendercontext world world result trace span let taa span info span taa entered get view query let view entity graph get input entity self in view let ok camera view target taa history textures prepass textures taa pipeline id some pipelines some pipeline cache self view query get manual world view entity world get resource world get resource else return ok we could provide an easier and more ergonomic node impl rust struct taanode view query querystate static 
extractedcamera static viewtarget static taahistorytextures static viewprepasstextures static taapipelineid if possible allow resource access declaratively taa pipeline res pipeline cache res impl simplenode for taanode fn run self graph mut rendergraphcontext render context mut rendercontext world world automatically get view query camera view target taa history textures result trace span automatically added | 1 |
106,678 | 23,265,828,431 | IssuesEvent | 2022-08-04 17:16:29 | objectos/objectos | https://api.github.com/repos/objectos/objectos | reopened | AsciiDoc: support inline macros | t:feature c:code a:objectos-asciidoc | ## Test cases
- [x] tc01: well formed https
- [x] tc02: not an inline macro (rollback) | 1.0 | AsciiDoc: support inline macros - ## Test cases
- [x] tc01: well formed https
- [x] tc02: not an inline macro (rollback) | non_usab | asciidoc support inline macros test cases well formed https not an inline macro rollback | 0 |
14,453 | 9,195,321,355 | IssuesEvent | 2019-03-07 01:55:32 | PuzzleServer/mainpuzzleserver | https://api.github.com/repos/PuzzleServer/mainpuzzleserver | opened | Add messages to author responses / player submissions explaining that the answer will be normalized | puzzleday ask usability | We should put text for authors / players reminding them that responses / submissions are normalized (changed to all capitols, whitespace removed) | True | Add messages to author responses / player submissions explaining that the answer will be normalized - We should put text for authors / players reminding them that responses / submissions are normalized (changed to all capitols, whitespace removed) | usab | add messages to author responses player submissions explaining that the answer will be normalized we should put text for authors players reminding them that responses submissions are normalized changed to all capitols whitespace removed | 1 |
28,114 | 31,789,161,714 | IssuesEvent | 2023-09-13 00:53:59 | neurobagel/neurobagel_examples | https://api.github.com/repos/neurobagel/neurobagel_examples | closed | Test local graph creation and querying with example data | maint:usability type:maintenance | - [x] go through [local docker compose API-graph setup](https://neurobagel.org/infrastructure/) with new example data + push pheno-bids.jsonld into newly created graph
- [x] spin up query tool Docker container locally (see https://github.com/neurobagel/api/issues/150)
- [x] run an empty query using query tool and download dataset-level and participant-level TSVs
- [x] add results TSVs from query tool to repo | True | Test local graph creation and querying with example data - - [x] go through [local docker compose API-graph setup](https://neurobagel.org/infrastructure/) with new example data + push pheno-bids.jsonld into newly created graph
- [x] spin up query tool Docker container locally (see https://github.com/neurobagel/api/issues/150)
- [x] run an empty query using query tool and download dataset-level and participant-level TSVs
- [x] add results TSVs from query tool to repo | usab | test local graph creation and querying with example data go through with new example data push pheno bids jsonld into newly created graph spin up query tool docker container locally see run an empty query using query tool and download dataset level and participant level tsvs add results tsvs from query tool to repo | 1 |
180,684 | 13,942,661,164 | IssuesEvent | 2020-10-22 21:25:15 | daniel-norris/neu_ui | https://api.github.com/repos/daniel-norris/neu_ui | closed | Create a simple FormGroup component test | good first issue hacktoberfest tests | **Is your feature request related to a problem? Please describe.**
Test coverage across the application is low. We need to build confidence that the components have the expected behaviour that we want and to help mitigate any regression in the future.
**Describe the solution you'd like**
We need to implement better test coverage across the library. Ideally each component should be accompanied by a test case written using Jest. At a minimum the test should check whether the component successfully shows any child props. If you are able to include tests for any additional functionality then that would be appreciated!
We should have a test case covering the FormGroup component implemented using Jest. More info on Jest can be found here.
For examples of how this is done, take a look at existing test cases in the library. An example would be the CardHeader or Input components.
You can run your tests using `npm run test` or to see test coverage across the library `npm run test:cov`.
This is part of epic #19. | 1.0 | Create a simple FormGroup component test - **Is your feature request related to a problem? Please describe.**
Test coverage across the application is low. We need to build confidence that the components have the expected behaviour that we want and to help mitigate any regression in the future.
**Describe the solution you'd like**
We need to implement better test coverage across the library. Ideally each component should be accompanied by a test case written using Jest. At a minimum the test should check whether the component successfully shows any child props. If you are able to include tests for any additional functionality then that would be appreciated!
We should have a test case covering the FormGroup component implemented using Jest. More info on Jest can be found here.
For examples of how this is done, take a look at existing test cases in the library. An example would be the CardHeader or Input components.
You can run your tests using `npm run test` or to see test coverage across the library `npm run test:cov`.
This is part of epic #19. | non_usab | create a simple formgroup component test is your feature request related to a problem please describe test coverage across the application is low we need to build confidence that the components have the expected behaviour that we want and to help mitigate any regression in the future describe the solution you d like we need to implement better test coverage across the library ideally each component should be accompanied by a test case written using jest at a minimum the test should check whether the component successfully shows any child props if you are able to include tests for any additional functionality then that would be appreciated we should have a test case covering the formgroup component implemented using jest more info on jest can be found here for examples of how this is done take a look at existing test cases in the library an example would be the cardheader or input components you can run your tests using npm run test or to see test coverage across the library npm run test cov this is part of epic | 0 |
14,076 | 2,789,887,655 | IssuesEvent | 2015-05-08 22:11:09 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | opened | Table won't work in XHTML | Priority-Medium Type-Defect | Original [issue 428](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=428) created by orwant on 2010-10-14T16:19:54.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Here is a test page, an alteration of the doc example :
http://www.hydrowide.com/google/test.xhtml
2. IMHO the issue is probably related to an improper usage of entity &nbsp; which is incorrect in XHTML and should be replaced by &#160;
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
google.visualization.Table
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO.
<b>What operating system and browser are you using?</b>
Windows 7 / Google chrome 7 / Firefox 3.6
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| 1.0 | Table won't work in XHTML - Original [issue 428](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=428) created by orwant on 2010-10-14T16:19:54.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Here is a test page, an alteration of the doc example :
http://www.hydrowide.com/google/test.xhtml
2. IMHO the issue is probably related to an improper usage of entity &nbsp; which is incorrect in XHTML and should be replaced by &#160;
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
google.visualization.Table
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO.
<b>What operating system and browser are you using?</b>
Windows 7 / Google chrome 7 / Firefox 3.6
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| non_usab | table won t work in xhtml original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code here is a test page an alteration of the doc example imho the issue is probably related to an improper usage of entity amp nbsp which is incorrect in xhtml and should be replaced by amp nbsp what component is this issue related to piechart linechart datatable query etc google visualization table are you using the test environment version if you are not sure answer no no what operating system and browser are you using windows google chrome firefox for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved | 0 |
252,516 | 27,245,660,176 | IssuesEvent | 2023-02-22 01:39:21 | sesong11/zulondb | https://api.github.com/repos/sesong11/zulondb | closed | CVE-2018-10237 (Medium) detected in guava-20.0.jar - autoclosed | security vulnerability | ## CVE-2018-10237 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guava-20.0.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.
Guava has only one code dependency - javax.annotation,
per the JSR-305 spec.</p>
<p>Library home page: <a href="https://github.com/google/guava/">https://github.com/google/guava/</a></p>
<p>Path to dependency file: /tmp/ws-scm/zulondb/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/google/guava/guava/20.0/guava-20.0.jar</p>
<p>
Dependency Hierarchy:
- zkbind-8.6.0.jar (Root Library)
- zul-8.6.0.jar
- zk-8.6.0.jar
- closure-compiler-unshaded-v20170626.jar
- :x: **guava-20.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sesong11/zulondb/commit/36bfcf4d3a94a9011dd54b175f794ff3b43f47a9">36bfcf4d3a94a9011dd54b175f794ff3b43f47a9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.
<p>Publish Date: 2018-04-26
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10237>CVE-2018-10237</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10237">https://nvd.nist.gov/vuln/detail/CVE-2018-10237</a></p>
<p>Release Date: 2018-04-26</p>
<p>Fix Resolution: 24.1.1-jre, 24.1.1-android</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-10237 (Medium) detected in guava-20.0.jar - autoclosed - ## CVE-2018-10237 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guava-20.0.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.
Guava has only one code dependency - javax.annotation,
per the JSR-305 spec.</p>
<p>Library home page: <a href="https://github.com/google/guava/">https://github.com/google/guava/</a></p>
<p>Path to dependency file: /tmp/ws-scm/zulondb/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/google/guava/guava/20.0/guava-20.0.jar</p>
<p>
Dependency Hierarchy:
- zkbind-8.6.0.jar (Root Library)
- zul-8.6.0.jar
- zk-8.6.0.jar
- closure-compiler-unshaded-v20170626.jar
- :x: **guava-20.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sesong11/zulondb/commit/36bfcf4d3a94a9011dd54b175f794ff3b43f47a9">36bfcf4d3a94a9011dd54b175f794ff3b43f47a9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.
<p>Publish Date: 2018-04-26
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10237>CVE-2018-10237</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10237">https://nvd.nist.gov/vuln/detail/CVE-2018-10237</a></p>
<p>Release Date: 2018-04-26</p>
<p>Fix Resolution: 24.1.1-jre, 24.1.1-android</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium detected in guava jar autoclosed cve medium severity vulnerability vulnerable library guava jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more guava has only one code dependency javax annotation per the jsr spec library home page a href path to dependency file tmp ws scm zulondb pom xml path to vulnerable library root repository com google guava guava guava jar dependency hierarchy zkbind jar root library zul jar zk jar closure compiler unshaded jar x guava jar vulnerable library found in head commit a href vulnerability details unbounded memory allocation in google guava through x before allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker provided data because the atomicdoublearray class when serialized with java serialization and the compoundordering class when serialized with gwt serialization perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jre android step up your open source security game with whitesource | 0 |
15,767 | 10,281,352,416 | IssuesEvent | 2019-08-26 08:17:52 | microsoft/MixedRealityToolkit-Unity | https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity | closed | Prefab and meta serialization is not up to date | 0 - Backlog Bug Editor MRTK usability | ## Describe the bug
prefab, meta, and scene assets are not automatically updated in Unity.
the serialization of those files can change when:
- unity version is updated
- some code of the components used in the prefab has changed on the serialization part (renaming, adding, removing serialized fields)
currently, if i make a change in some of the mrtk prefabs, a whole list of diffs will come up.
bonus effect: if you make a tool (see below) in the mrtk, this can be useful when people are upgrading version on their side (to reserialize their profiles and scenes for example)
## Expected behavior
when a new mrtk release is pushed, all files should be up to date regarding serialization, at least the changes regarding mrtk code. (unity serialization changes are a subject to be discussed, i don't know the exact repercussions for retrocompatibility).
##
this process can be automated:
(i can provide a better version that displays a progress bar)
```csharp
[MenuItem("Assets/Reserialize Prefabs&Scenes...")]
private static void ReserializeAll()
{
    var array = GetAssets("t:Prefab t:Scene t:ScriptableObject");
    AssetDatabase.ForceReserializeAssets(array);
}

private static string[] GetAssets(string filter)
{
    string[] allPrefabsGUID = AssetDatabase.FindAssets($"{filter}");
    List<string> allPrefabs = new List<string>();
    foreach (string guid in allPrefabsGUID)
    {
        allPrefabs.Add(AssetDatabase.GUIDToAssetPath(guid));
    }
    return allPrefabs.ToArray();
}
```
| True | Prefab and meta serialization is not up to date - ## Describe the bug
prefab, meta, and scene assets are not automatically updated in Unity.
the serialization of those files can change when:
- unity version is updated
- some code of the components used in the prefab has changed on the serialization part (renaming, adding, removing serialized fields)
currently, if i make a change in some of the mrtk prefabs, a whole list of diffs will come up.
bonus effect: if you make a tool (see below) in the mrtk, this can be useful when people are upgrading version on their side (to reserialize their profiles and scenes for example)
## Expected behavior
when a new mrtk release is pushed, all files should be up to date regarding serialization, at least the changes regarding mrtk code. (unity serialization changes are a subject to be discussed, i don't know the exact repercussions for retrocompatibility).
##
this process can be automated:
(i can provide a better version that displays a progress bar)
```csharp
[MenuItem("Assets/Reserialize Prefabs&Scenes...")]
private static void ReserializeAll()
{
    var array = GetAssets("t:Prefab t:Scene t:ScriptableObject");
    AssetDatabase.ForceReserializeAssets(array);
}

private static string[] GetAssets(string filter)
{
    string[] allPrefabsGUID = AssetDatabase.FindAssets($"{filter}");
    List<string> allPrefabs = new List<string>();
    foreach (string guid in allPrefabsGUID)
    {
        allPrefabs.Add(AssetDatabase.GUIDToAssetPath(guid));
    }
    return allPrefabs.ToArray();
}
```
| usab | prefab and meta serialization is not up to date describe the bug prefab meta and scene assets are not automatically updated in unity the serialization of those files can change when unity version is updated some code of the components used in prefab has changed on the serialization part renaming adding removing serialized field currently if i make a change in some of the mrtk prefabs a whole list of diffs will come up bonus effect if you make a tool see below in the mrtk this can be useful when people are upgrading version on their side to reserialize their profiles and scenes for example expected behavior when a new mrtk release is pushed all files should be up to date regarding serialization at least the changes regarding mrtk code untiy serialization changes is a subject to be discussed i don t know the exact repercussion for retrocompatibilities this process can be automated i can provide a better version that displays a progress bar private static void reserializeall var array getassets t prefab t scene t scriptableobject assetdatabase forcereserializeassets array private static string getassets string filter string allprefabsguid assetdatabase findassets filter list allprefabs new list foreach string guid in allprefabsguid allprefabs add assetdatabase guidtoassetpath guid return allprefabs toarray | 1 |
4,144 | 3,735,713,693 | IssuesEvent | 2016-03-08 13:26:01 | kubernetes/dashboard | https://api.github.com/repos/kubernetes/dashboard | closed | Replication Controllers List View - Wrong external endpoint link | area/usability wontfix | #### Issue details
##### Environment
<!-- Describe how do you run Kubernetes and Dashboard.
Versions of Node.js, Go etc. are needed only from developers. To get them use console:
$ node --version
$ go version
-->
```
Dashboard version: HEAD
Kubernetes version: HEAD
Operating system: Windows 7 x64
```
##### Steps to reproduce
<!-- Describe all steps needed to reproduce the issue. It is a good place to use numbered list. -->
I've started a cluster with `local-up-cluster` on an ubuntu machine, then I've accessed the deployed dashboard from a notebook with windows.
##### Observed result
<!-- Describe observed result as precisely as possible. -->
External link shows `localhost:<port>` on external machines.
##### Comments
<!-- If you have any comments or more details, put them here. -->
Localhost link is not working for notebook as it is deployed on other machine. It can be accessed but only when actual `<ubuntu_machine_ip>:31521` is provided.

| True | Replication Controllers List View - Wrong external endpoint link - #### Issue details
##### Environment
<!-- Describe how do you run Kubernetes and Dashboard.
Versions of Node.js, Go etc. are needed only from developers. To get them use console:
$ node --version
$ go version
-->
```
Dashboard version: HEAD
Kubernetes version: HEAD
Operating system: Windows 7 x64
```
##### Steps to reproduce
<!-- Describe all steps needed to reproduce the issue. It is a good place to use numbered list. -->
I've started a cluster with `local-up-cluster` on an ubuntu machine, then I've accessed the deployed dashboard from a notebook with windows.
##### Observed result
<!-- Describe observed result as precisely as possible. -->
External link shows `localhost:<port>` on external machines.
##### Comments
<!-- If you have any comments or more details, put them here. -->
Localhost link is not working for notebook as it is deployed on other machine. It can be accessed but only when actual `<ubuntu_machine_ip>:31521` is provided.

| usab | replication controllers list view wrong external endpoint link issue details environment describe how do you run kubernetes and dashboard versions of node js go etc are needed only from developers to get them use console node version go version dashboard version head kubernetes version head operating system windows steps to reproduce i ve started cluster with local up cluster on ubuntu machine then i ve accessed deployed dashboard from notebook with windows observed result external link shows localhost on external machines comments localhost link is not working for notebook as it is deployed on other machine it can be accessed but only when actual is provided | 1 |
19,561 | 14,236,902,898 | IssuesEvent | 2020-11-18 16:32:56 | solo-io/gloo-mesh | https://api.github.com/repos/solo-io/gloo-mesh | opened | Validate interaction between Gloo Mesh config and user-provided underlying config | Area: Usability Type: Enhancement | **Is your feature request related to a problem? Please describe.**
Users might want to provide underlying mesh (e.g. Istio) config in addition to Gloo Mesh config. Use cases include migrating to Gloo Mesh from a directly configured service mesh, or the use of mesh-specific config to produce some behavior not currently expose by the Gloo Mesh API.
**Describe the solution you'd like**
We should validate that Gloo Mesh is capable of "playing nicely" with user-provided underlying mesh configuration. Areas of concern include conflicting configuration and excess config produced by automated processes within Gloo Mesh that may be redundant with user-provided configuration. | True | Validate interaction between Gloo Mesh config and user-provided underlying config - **Is your feature request related to a problem? Please describe.**
Users might want to provide underlying mesh (e.g. Istio) config in addition to Gloo Mesh config. Use cases include migrating to Gloo Mesh from a directly configured service mesh, or the use of mesh-specific config to produce some behavior not currently expose by the Gloo Mesh API.
**Describe the solution you'd like**
We should validate that Gloo Mesh is capable of "playing nicely" with user-provided underlying mesh configuration. Areas of concern include conflicting configuration and excess config produced by automated processes within Gloo Mesh that may be redundant with user-provided configuration. | usab | validate interaction between gloo mesh config and user provided underlying config is your feature request related to a problem please describe users might want to provide underlying mesh e g istio config in addition to gloo mesh config use cases include migrating to gloo mesh from a directly configured service mesh or the use of mesh specific config to produce some behavior not currently expose by the gloo mesh api describe the solution you d like we should validate that gloo mesh is capable of playing nicely with user provided underlying mesh configuration areas of concern include conflicting configuration and excess config produced by automated processes within gloo mesh that may be redundant with user provided configuration | 1 |
1,647 | 2,922,211,397 | IssuesEvent | 2015-06-25 08:54:46 | Elgg/Elgg | https://api.github.com/repos/Elgg/Elgg | opened | elgg_normalize_url can cause a lot of php errors | bug dev usability easy | elgg_normalize_url assumes $url is a string (as it should be provided), but it doesn't check it, so if it's not a string but for example an array this causes a lot of php errors
My use case:
```php
$friendly_url = $entity->friendly_url;
echo elgg_normalize_url($friendly_url);
```
For some weird reason $friendly_url became an array and now my php log is flooded
I need to fix this on my side, but also some input validation should be done in elgg_normalize_url | True | elgg_normalize_url can cause a lot of php errors - elgg_normalize_url assumes $url is a string (as it should be provided), but it doesn't check it, so if it's not a string but for example an array this causes a lot of php errors
My use case:
```php
$friendly_url = $entity->friendly_url;
echo elgg_normalize_url($friendly_url);
```
For some weird reason $friendly_url became an array and now my php log is flooded
I need to fix this on my side, but also some input validation should be done in elgg_normalize_url | usab | elgg normalize url can cause a lot of php errors elgg normalize url assumes url is a string as it should be provided but it doesn t check it so if it s not a string but for example an array this causes a lot of php errors my use case php friendly url entity friendly url echo elgg normalize url friendly url for some weird reason friendly url became an array and now my php log is flooded i need to fix this on my side but also some input validation should be done in elgg normalize url | 1 |
327,364 | 24,131,236,271 | IssuesEvent | 2022-09-21 07:37:23 | strimzi/strimzi-kafka-operator | https://api.github.com/repos/strimzi/strimzi-kafka-operator | opened | Update Blog-Entry "Deploying Kafka with Let's Encrypt certificates" with NGINX TLS passthrough | documentation | Hi,
i want to deploy my Kafka using Let's Encrypt certificates, following the Blog-Entry from Jakub Scholz:
[Deploying Kafka with Let's Encrypt certificates](https://strimzi.io/blog/2021/05/07/deploying-kafka-with-lets-encrypt-certificates/)
The description is very clear and step by step i built my system, but i failed because my NGINX Ingress Controller for Kubernetes didn't have TLS passthrough enabled.
So this cost me a lot of time to find out what was going wrong.
Maybe you can update the Blog-Entry and add a reference to the point that TLS passthrough has to be enabled, as it is already in the Strimzi documentation:
[4.3. Accessing Kafka using ingress](https://strimzi.io/docs/operators/latest/full/configuring.html#proc-accessing-kafka-using-ingress-str)
Even though this Blog-Entry is not part of the official documentation, it is very important because it is the only point describing this setup.
Thanks.
| 1.0 | Update Blog-Entry "Deploying Kafka with Let's Encrypt certificates" with NGINX TLS passthrough - Hi,
i want to deploy my Kafka using Let's Encrypt certificates, following the Blog-Entry from Jakub Scholz:
[Deploying Kafka with Let's Encrypt certificates](https://strimzi.io/blog/2021/05/07/deploying-kafka-with-lets-encrypt-certificates/)
The description is very clear and step by step i built my system, but i failed because my NGINX Ingress Controller for Kubernetes didn't have TLS passthrough enabled.
So this cost me a lot of time to find out what was going wrong.
Maybe you can update the Blog-Entry and add a reference to the point that TLS passthrough has to be enabled, as it is already in the Strimzi documentation:
[4.3. Accessing Kafka using ingress](https://strimzi.io/docs/operators/latest/full/configuring.html#proc-accessing-kafka-using-ingress-str)
Even though this Blog-Entry is not part of the official documentation, it is very important because it is the only point describing this setup.
Thanks.
| non_usab | update blog entry deploying kafka with let s encrypt certificates with nginx tls passthrough hi i want to deploy my kafka using let s encrypt certificates following the blog entry from jakub scholz the description is very clear and step by step i build my system but i failed because my nginx ingress controller for kubernetes hadn t tls passthrough enabled so this cost me a lot of time to find out what s going wrong maybe you can update the blog entry and add a reference to the point that tls passthrough has to be enabled as it is already in the strimzi documentation even though this blog entry is not part of the official documentation it is very important because it is the only point describing this setup thanks | 0 |
523,652 | 15,187,192,514 | IssuesEvent | 2021-02-15 13:28:10 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Title and abstract don't update in the same edit session (Style Editor) | Priority: Medium StyleEditor bug geonode_integration | ### Description
Title and abstract don't update in the style list if they have been changed in the editor.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- all
*Steps to reproduce*
- open a map with a vector layer
- select layer and open settings
- select style tab
- select a style to edit
- change title and abstract of the style and save
- back to style list
*Expected Result*
- title and abstract are updated
*Current Result*
- title and abstract are still the old ones; you need to close the settings and open them again to see the applied changes to title and abstract
### Other useful information (optional):
| 1.0 | Title and abstract don't update in the same edit session (Style Editor) - ### Description
Title and abstract don't update in the style list if they have been changed in the editor.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- all
*Steps to reproduce*
- open a map with a vector layer
- select layer and open settings
- select style tab
- select a style to edit
- change title and abstract of the style and save
- back to style list
*Expected Result*
- title and abstract are updated
*Current Result*
- title and abstract are still the old ones; you need to close the settings and open them again to see the applied changes to title and abstract
### Other useful information (optional):
| non_usab | title and abstract don t update in the same edit session style editor description if title and abstract don t update in the style list if they have been changed in the editor in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari browser version affected all steps to reproduce open a map with a vector layer select layer and open settings select style tab select a style to edit change title and abstract of the style and save back to style list expected result title and abstract are updated current result title and abstract are still the old ones you need to close the setting and open again to see the applied changes of title and abstract other useful information optional | 0 |
2,810 | 5,591,562,558 | IssuesEvent | 2017-03-30 00:19:57 | thebrightspark/SparksHammers | https://api.github.com/repos/thebrightspark/SparksHammers | closed | Crash when opening Hammer Table if Backpack(Eydamos) is installed | bug compatibility | As seen [here](https://minecraft.curseforge.com/projects/forge-backpacks/issues/1)
Recent [crash](http://pastebin.com/CzzqQCfL)
MC 1.10.2
backpack-3.0.1-1.10.2.jar
SparksHammers-1.10.2-1.4.4.jar
and many more, but irrelevant
Seems to come from something on your side, as any other inventory (that i tried) opens nicely | True | Crash when opening Hammer Table if Backpack(Eydamos) is installed - As seen [here](https://minecraft.curseforge.com/projects/forge-backpacks/issues/1)
Recent [crash](http://pastebin.com/CzzqQCfL)
MC 1.10.2
backpack-3.0.1-1.10.2.jar
SparksHammers-1.10.2-1.4.4.jar
and many more, but irrelevant
Seems to come from something on your side, as any other inventory (that i tried) opens nicely | non_usab | crash when opening hammer table if backpack eydamos is installed as seens recent mc backpack jar sparkshammers jar and many more but irrelevant seems to come from something your side as any other inventory that i tried open nicely | 0
1,982 | 3,025,918,603 | IssuesEvent | 2015-08-03 12:05:59 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 20384054: Right-aligned UITextField does not shrink its text along its baseline | classification:ui/usability reproducible:always status:open | #### Description
Summary:
UITextField has a convenient property adjustsFontSizeToFitWidth, which causes the field to shrink its text until it either fits in the field or hits the field's minimumFontSize. This property is documented to shrink the field's text along its baseline.
However, when the field is right-aligned, the text does not shrink along the baseline; instead, it gradually moves up in the field as it shrinks, despite the baseline (as exposed to Auto Layout) staying in the same place.
Steps to Reproduce:
1. Run the attached sample app on an iPhone 4s. Note the placeholder text is aligned with the leading "Name" label.
2. Tap in the field and type "this text is aligned". Note the text is still aligned with the label.
3. Continue typing, adding the phrase "until it has to start shrinking to fit".
Expected Results:
The text remains baseline-aligned with the leading "Name" label, but shrinks to its minimum font size of 10pts.
Actual Results:
The text shrinks to its minimum size, but moves up in the field, causing a visual misalignment between the label and the field.
Version:
iOS 8.3 beta 4 (12F61a)
Notes:
Configuration:
iPhone 4s Simulator
Attachments:
'TextFieldFontAdjustmentTest.zip' was successfully uploaded.
-
Product Version: iOS 8.3 beta 4 (12F61a)
Created: 2015-04-01T17:02:38.657181
Originated: 2015-04-01T10:01:00
Open Radar Link: http://www.openradar.me/20384054 | True | 20384054: Right-aligned UITextField does not shrink its text along its baseline - #### Description
Summary:
UITextField has a convenient property adjustsFontSizeToFitWidth, which causes the field to shrink its text until it either fits in the field or hits the field's minimumFontSize. This property is documented to shrink the field's text along its baseline.
However, when the field is right-aligned, the text does not shrink along the baseline; instead, it gradually moves up in the field as it shrinks, despite the baseline (as exposed to Auto Layout) staying in the same place.
Steps to Reproduce:
1. Run the attached sample app on an iPhone 4s. Note the placeholder text is aligned with the leading "Name" label.
2. Tap in the field and type "this text is aligned". Note the text is still aligned with the label.
3. Continue typing, adding the phrase "until it has to start shrinking to fit".
Expected Results:
The text remains baseline-aligned with the leading "Name" label, but shrinks to its minimum font size of 10pts.
Actual Results:
The text shrinks to its minimum size, but moves up in the field, causing a visual misalignment between the label and the field.
Version:
iOS 8.3 beta 4 (12F61a)
Notes:
Configuration:
iPhone 4s Simulator
Attachments:
'TextFieldFontAdjustmentTest.zip' was successfully uploaded.
-
Product Version: iOS 8.3 beta 4 (12F61a)
Created: 2015-04-01T17:02:38.657181
Originated: 2015-04-01T10:01:00
Open Radar Link: http://www.openradar.me/20384054 | usab | right aligned uitextfield does not shrink its text along its baseline description summary uitextfield has a convenient property adjustsfontsizetofitwidth which causes the field to shrink its text until it either fits in the field or hits the field s minimumfontsize this property is documented to shrink the field s text along its baseline however when the field is right aligned the text does not shrink along the baseline instead it gradually moves up in the field as it shrinks despite the baseline as exposed to auto layout staying in the same place steps to reproduce run the attached sample app on an iphone note the placeholder text is aligned with the leading name label tap in the field and type this text is aligned note the text is still aligned with the label continue typing adding the phrase until it has to start shrinking to fit expected results the text remains baseline aligned with the leading name label but shrinks to its minimum font size of actual results the text shrinks to its minimum size but moves up in the field causing a visual misalignment between the label and the field version ios beta notes configuration iphone simulator attachments textfieldfontadjustmenttest zip was successfully uploaded product version ios beta created originated open radar link | 1 |
26,617 | 27,034,225,231 | IssuesEvent | 2023-02-12 15:29:28 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | closed | MacOS Monterey intermittent stops due to other network extensions | OS-macos L1 Very few P2 Aggravating T5 Usability vpn-interop | Hi there,
On Monterey, the Tailscale app seems to intermittently stop and completely fail. I have to kill the Tailscale app so that the network is fixed, then re-launch and enable it to fix the network issue.
Basically everything stops routing.
Any logs I can provide to help identify the cause? | True | MacOS Monterey intermittent stops due to other network extensions - Hi there,
On Monterey, the Tailscale app seems to intermittently stop and completely fail. I have to kill the Tailscale app so that the network is fixed, then re-launch and enable it to fix the network issue.
Basically everything stops routing.
Any logs I can provide to help identify the cause? | usab | macos monterey intermittent stops due to other network extensions hi there on monterey the tailscale app seems to intermittently stop and completely fail i have to kill the tailscale app so that the network is fixed then re launch and enable it to fix the network issue basically everything stops routing any logs i can provide to help identify the cause | 1 |
194,106 | 22,261,852,130 | IssuesEvent | 2022-06-10 01:45:15 | Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | reopened | CVE-2020-10711 (Medium) detected in linuxlinux-4.19.88 | security vulnerability | ## CVE-2020-10711 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/netlabel/netlabel_kapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/netlabel/netlabel_kapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel's SELinux subsystem in versions before 5.7. This flaw occurs while importing the Commercial IP Security Option (CIPSO) protocol's category bitmap into the SELinux extensible bitmap via the' ebitmap_netlbl_import' routine. While processing the CIPSO restricted bitmap tag in the 'cipso_v4_parsetag_rbm' routine, it sets the security attribute to indicate that the category bitmap is present, even if it has not been allocated. This issue leads to a NULL pointer dereference issue while importing the same category bitmap into SELinux. This flaw allows a remote network user to crash the system kernel, resulting in a denial of service.
<p>Publish Date: 2020-05-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10711>CVE-2020-10711</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711</a></p>
<p>Release Date: 2020-05-22</p>
<p>Fix Resolution: v5.7-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10711 (Medium) detected in linuxlinux-4.19.88 - ## CVE-2020-10711 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/netlabel/netlabel_kapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/netlabel/netlabel_kapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel's SELinux subsystem in versions before 5.7. This flaw occurs while importing the Commercial IP Security Option (CIPSO) protocol's category bitmap into the SELinux extensible bitmap via the' ebitmap_netlbl_import' routine. While processing the CIPSO restricted bitmap tag in the 'cipso_v4_parsetag_rbm' routine, it sets the security attribute to indicate that the category bitmap is present, even if it has not been allocated. This issue leads to a NULL pointer dereference issue while importing the same category bitmap into SELinux. This flaw allows a remote network user to crash the system kernel, resulting in a denial of service.
<p>Publish Date: 2020-05-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10711>CVE-2020-10711</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711</a></p>
<p>Release Date: 2020-05-22</p>
<p>Fix Resolution: v5.7-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files linux net netlabel netlabel kapi c linux net netlabel netlabel kapi c vulnerability details a null pointer dereference flaw was found in the linux kernel s selinux subsystem in versions before this flaw occurs while importing the commercial ip security option cipso protocol s category bitmap into the selinux extensible bitmap via the ebitmap netlbl import routine while processing the cipso restricted bitmap tag in the cipso parsetag rbm routine it sets the security attribute to indicate that the category bitmap is present even if it has not been allocated this issue leads to a null pointer dereference issue while importing the same category bitmap into selinux this flaw allows a remote network user to crash the system kernel resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
22,114 | 18,720,501,198 | IssuesEvent | 2021-11-03 11:12:49 | ethersphere/swarm-cli | https://api.github.com/repos/ethersphere/swarm-cli | closed | Feed print could show how many updates happened on the feed | enhancement issue usability | With the current feed indexing schema the performance of the lookup gets worse with the number of updates. It would be good to provide some information to the user about the number of updates on a given feed.
I imagine this could be shown with the `feed print` command (maybe for the sake of consistency we can consider using `show` instead of `print` as command name). | True | Feed print could show how many updates happened on the feed - With the current feed indexing schema the performance of the lookup gets worse with the number of updates. It would be good to provide some information to the user about the number of updates on a given feed.
I imagine this could be shown with the `feed print` command (maybe for the sake of consistency we can consider using `show` instead of `print` as command name). | usab | feed print could show how many updates happened on the feed with the current feed indexing schema the performance of the lookup gets worse with the number of updates it would be good to provide some information to the user about the number of updates on a given feed i imagine this could be shown with the feed print command maybe for the sake of consistency we can consider using show instead of print as command name | 1 |
107,441 | 13,460,677,120 | IssuesEvent | 2020-09-09 13:54:48 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | Complete Design-intent collaboration checkpoint meeting | LIH auth-homepage design needs-grooming vsa-authenticated-exp | ## Background
[Design-Intent checkpoint requirements](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/working-with-vsp/vsp-collaboration-cycle/vsp-collaboration-cycle.md#design-intent-collaboration)
## Tasks
- [ ] Provide all required design artifacts
- [ ] Complete DI checkpoint | 1.0 | Complete Design-intent collaboration checkpoint meeting - ## Background
[Design-Intent checkpoint requirements](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/working-with-vsp/vsp-collaboration-cycle/vsp-collaboration-cycle.md#design-intent-collaboration)
## Tasks
- [ ] Provide all required design artifacts
- [ ] Complete DI checkpoint | non_usab | complete design intent collaboration checkpoint meeting background tasks provide all required design artifacts complete di checkpoint | 0 |
119,507 | 10,056,427,535 | IssuesEvent | 2019-07-22 09:08:00 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | closed | monitoring tests can be run in parallel with others | quality/testability | **Description**
It should be possible to run monitoring tests in parallel with other Kyma tests. We need to verify if it is possible now and change the test if necessary. Then its TestDefinition should be modified to enable concurrency.
**Reasons**
One way to make the Kyma test suite faster is to run tests in parallel.
**Acceptance Criteria**
- [ ] concurrency is enabled in monitoring test and it is stable on CI
See also https://github.com/kyma-project/kyma/issues/4299 as an example | 1.0 | monitoring tests can be run in parallel with others - **Description**
It should be possible to run monitoring tests in parallel with other Kyma tests. We need to verify if it is possible now and change the test if necessary. Then its TestDefinition should be modified to enable concurrency.
**Reasons**
One way to make the Kyma test suite faster is to run tests in parallel.
**Acceptance Criteria**
- [ ] concurrency is enabled in monitoring test and it is stable on CI
See also https://github.com/kyma-project/kyma/issues/4299 as an example | non_usab | monitoring tests can be run in parallel with others description it should be possible to run monitoring tests in parallel with other kyma tests we need to verify if it is possible now and change the test if necessary then its testdefinition should be modified to enable concurrency reasons one way to make the kyma test suite faster is to run tests in parallel acceptance criteria concurrency is enabled in monitoring test and it is stable on ci see also as an example | 0 |