Unnamed: 0 int64 3 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 742 | labels stringlengths 4 431 | body stringlengths 5 239k | index stringclasses 10 values | text_combine stringlengths 96 240k | label stringclasses 2 values | text stringlengths 96 200k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29,022 | 23,672,857,794 | IssuesEvent | 2022-08-27 16:31:38 | jrsmith3/ibei | https://api.github.com/repos/jrsmith3/ibei | closed | Write GitHub action to post documentation to readthedocs.org for new releases | development infrastructure | # Overview
The scope of this issue is to upload the documentation build by the automation described in #55 to readthedocs.org via GitHub action. Documentation should only be updated in this way for releases.
# Related issues
* Depends on #52.
* Depends on #55. | 1.0 | Write GitHub action to post documentation to readthedocs.org for new releases - # Overview
The scope of this issue is to upload the documentation build by the automation described in #55 to readthedocs.org via GitHub action. Documentation should only be updated in this way for releases.
# Related issues
* Depends on #52.
* Depends on #55. | non_usab | write github action to post documentation to readthedocs org for new releases overview the scope of this issue is to upload the documentation build by the automation described in to readthedocs org via github action documentation should only be updated in this way for releases related issues depends on depends on | 0 |
755,600 | 26,434,323,525 | IssuesEvent | 2023-01-15 07:17:12 | fredo-ai/Fredo-Public | https://api.github.com/repos/fredo-ai/Fredo-Public | closed | Add mixpanel events for snooze | priority-1 current sprint | When you snooze event it create new event "REMINDER CREATED" since we are creating new reminder in 10 minutes.
in that case we should add a boolean property to that event called "snoozed" (True/False).
If Reminder created as a result of snooze the property of REMINDER CREATED event should be **_snoozed=true_** | 1.0 | Add mixpanel events for snooze - When you snooze event it create new event "REMINDER CREATED" since we are creating new reminder in 10 minutes.
in that case we should add a boolean property to that event called "snoozed" (True/False).
If Reminder created as a result of snooze the property of REMINDER CREATED event should be **_snoozed=true_** | non_usab | add mixpanel events for snooze when you snooze event it create new event reminder created since we are creating new reminder in minutes in that case we should add a boolean property to that event called snoozed true false if reminder created as a result of snooze the property of reminder created event should be snoozed true | 0 |
11,620 | 7,326,867,578 | IssuesEvent | 2018-03-04 01:43:52 | fennekki/cdparacord | https://api.github.com/repos/fennekki/cdparacord | closed | Nothing is currently configurable | enhancement usability | There should be a configuration file of some kind. `$XDG_CONFIG_HOME/cdparacord/config`, maybe. | True | Nothing is currently configurable - There should be a configuration file of some kind. `$XDG_CONFIG_HOME/cdparacord/config`, maybe. | usab | nothing is currently configurable there should be a configuration file of some kind xdg config home cdparacord config maybe | 1 |
6,037 | 4,119,206,825 | IssuesEvent | 2016-06-08 14:14:56 | prometheus/prometheus | https://api.github.com/repos/prometheus/prometheus | closed | Rename `target_groups` to `static_configs` | area/usability component/config kind/breaking change kind/friction | Our SD configurations are consistently suffixed with `_configs` and descriptive of what they do. `target_groups` seems out of order.
I'd suggest renaming it to `static_configs` as we have a break to configuration with the recent changes in file SD configs anyway. | True | Rename `target_groups` to `static_configs` - Our SD configurations are consistently suffixed with `_configs` and descriptive of what they do. `target_groups` seems out of order.
I'd suggest renaming it to `static_configs` as we have a break to configuration with the recent changes in file SD configs anyway. | usab | rename target groups to static configs our sd configurations are consistently suffixed with configs and descriptive of what they do target groups seems out of order i d suggest renaming it to static configs as we have a break to configuration with the recent changes in file sd configs anyway | 1 |
2,373 | 3,072,659,389 | IssuesEvent | 2015-08-19 18:01:43 | FieldDB/FieldDB | https://api.github.com/repos/FieldDB/FieldDB | closed | Add second try for "view all sessions" in spreadsheet if first times out, and warn user it will take a long time | Usability | When the user selects "view all sessions" in a large corpus in the spreadsheet app, the server will time out and ask the user to reload. It will then send them to the last session instead of all sessions.
The user is not warned that loading all sessions will take time (as in a popup in the prototype) | True | Add second try for "view all sessions" in spreadsheet if first times out, and warn user it will take a long time - When the user selects "view all sessions" in a large corpus in the spreadsheet app, the server will time out and ask the user to reload. It will then send them to the last session instead of all sessions.
The user is not warned that loading all sessions will take time (as in a popup in the prototype) | usab | add second try for view all sessions in spreadsheet if first times out and warn user it will take a long time when the user selects view all sessions in a large corpus in the spreadsheet app the server will time out and ask the user to reload it will then send them to the last session instead of all sessions the user is not warned that loading all sessions will take time as in a popup in the prototype | 1 |
20,215 | 15,147,950,326 | IssuesEvent | 2021-02-11 09:55:03 | elastic/rally | https://api.github.com/repos/elastic/rally | opened | Allow to selectively ignore response errors | :Usability enhancement | Currently Rally implements three different behaviors when a response error occurs:
1. `continue`: Regardless of the type of error, Rally will continue (even on network issues) and only record that an error has happened.
2. `continue-on-non-fatal` (default): Similar to `continue` but it will fail on network connection errors.
3. `abort`: The benchmark will be aborted as soon as any error happens.
While the differentiation between (1) and (2) is too fine-grained and we should instead never continue on network errors, `abort` is too coarse-grained. While we should still fail with `--on-error=abort` there are cases where we should give track authors more control to decide which tasks are ok to ignore certain errors and for which tasks it would be a problem if an error occurs.
### Proposal
1. We rename `continue-on-non-fatal` (the current default) to `continue` and remove the current behavior of `continue`.
2. We introduce a new task property called `ignore-response-error-level`. At the moment, only one value is allowed: `non-fatal`. When a benchmark is run with `--on-error=abort` and this property is present on a task, only [errors that are considered fatal](https://github.com/elastic/rally/blob/b8592a6071e549be99ac5538f960fe3026e513fb/esrally/driver/driver.py#L1448) will abort the benchmark when this task is run. On all other errors, Rally will continue. | True | Allow to selectively ignore response errors - Currently Rally implements three different behaviors when a response error occurs:
1. `continue`: Regardless of the type of error, Rally will continue (even on network issues) and only record that an error has happened.
2. `continue-on-non-fatal` (default): Similar to `continue` but it will fail on network connection errors.
3. `abort`: The benchmark will be aborted as soon as any error happens.
While the differentiation between (1) and (2) is too fine-grained and we should instead never continue on network errors, `abort` is too coarse-grained. While we should still fail with `--on-error=abort` there are cases where we should give track authors more control to decide which tasks are ok to ignore certain errors and for which tasks it would be a problem if an error occurs.
### Proposal
1. We rename `continue-on-non-fatal` (the current default) to `continue` and remove the current behavior of `continue`.
2. We introduce a new task property called `ignore-response-error-level`. At the moment, only one value is allowed: `non-fatal`. When a benchmark is run with `--on-error=abort` and this property is present on a task, only [errors that are considered fatal](https://github.com/elastic/rally/blob/b8592a6071e549be99ac5538f960fe3026e513fb/esrally/driver/driver.py#L1448) will abort the benchmark when this task is run. On all other errors, Rally will continue. | usab | allow to selectively ignore response errors currently rally implements three different behaviors when a response error occurs continue regardless of the type of error rally will continue even on network issues and only record that an error has happened continue on non fatal default similar to continue but it will fail on network connection errors abort the benchmark will be aborted as soon as any error happens while the differentiation between and is too fine grained and we should instead never continue on network errors abort is too coarse grained while we should still fail with on error abort there are cases where we should give track authors more control to decide which tasks are ok to ignore certain errors and for which tasks it would be a problem if an error occurs proposal we rename continue on non fatal the current default to continue and remove the current behavior of continue we introduce a new task property called ignore response error level at the moment only one value is allowed non fatal when a benchmark is run with on error abort and this property is present on a task only will abort the benchmark when this task is run on all other errors rally will continue | 1 |
10,743 | 6,901,555,540 | IssuesEvent | 2017-11-25 09:09:43 | the-tale/the-tale | https://api.github.com/repos/the-tale/the-tale | opened | Иконка доступной для взятия карты не пропадает сразу по факту взятия | comp_general cont_usability est_simple good first issue type_bug | Надо чтобы её статус обновлялся после каждого взятия. | True | Иконка доступной для взятия карты не пропадает сразу по факту взятия - Надо чтобы её статус обновлялся после каждого взятия. | usab | иконка доступной для взятия карты не пропадает сразу по факту взятия надо чтобы её статус обновлялся после каждого взятия | 1 |
7,179 | 4,805,038,548 | IssuesEvent | 2016-11-02 15:07:15 | Elgg/Elgg | https://api.github.com/repos/Elgg/Elgg | closed | Should we support "plugin" settings for composer project? | dev usability discussion | Currently the root directory of composer project works like a plugin; You can add for example `views/` directory or `start.php` to it, and they will work as they would work within a plugin.
It currently isn't however possible to define plugin settings from the root, because all the features depend on `plugin_id`.
See for example:
- https://github.com/Elgg/Elgg/blob/2.1/views/default/admin/plugin_settings.php#L19
- https://github.com/Elgg/Elgg/blob/2.1/engine/lib/admin.php#L455
Should we add support for plugin settings and plugin user settings for composer projects?
| True | Should we support "plugin" settings for composer project? - Currently the root directory of composer project works like a plugin; You can add for example `views/` directory or `start.php` to it, and they will work as they would work within a plugin.
It currently isn't however possible to define plugin settings from the root, because all the features depend on `plugin_id`.
See for example:
- https://github.com/Elgg/Elgg/blob/2.1/views/default/admin/plugin_settings.php#L19
- https://github.com/Elgg/Elgg/blob/2.1/engine/lib/admin.php#L455
Should we add support for plugin settings and plugin user settings for composer projects?
| usab | should we support plugin settings for composer project currently the root directory of composer project works like a plugin you can add for example views directory or start php to it and they will work as they would work within a plugin it currently isn t however possible to define plugin settings from the root because all the features depend on plugin id see for example should we add support for plugin settings and plugin user settings for composer projects | 1 |
433,787 | 30,350,057,815 | IssuesEvent | 2023-07-11 18:14:09 | ManageIQ/manageiq.org | https://api.github.com/repos/ManageIQ/manageiq.org | closed | Remove scheduling of database backups from documentation | documentation | https://www.manageiq.org/docs/reference/latest/general_configuration/#scheduling-smartstate-analyses-and-backups
I believe it was removed in: https://github.com/ManageIQ/manageiq/pull/21415
I don't know if that section of the documentation needs new screenshots or updated guidance but it looks old to me. | 1.0 | Remove scheduling of database backups from documentation - https://www.manageiq.org/docs/reference/latest/general_configuration/#scheduling-smartstate-analyses-and-backups
I believe it was removed in: https://github.com/ManageIQ/manageiq/pull/21415
I don't know if that section of the documentation needs new screenshots or updated guidance but it looks old to me. | non_usab | remove scheduling of database backups from documentation i believe it was removed in i don t know if that section of the documentation needs new screenshots or updated guidance but it looks old to me | 0 |
274,233 | 8,558,783,217 | IssuesEvent | 2018-11-08 19:17:00 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.facebook.com - design is broken | browser-firefox priority-critical | <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/permalink.php?story_fbid=267102693997732&id=178816082826394
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: "redwoodcity.org" title-text is covered up by paragraph below it, in Firefox and Edge
**Steps to Reproduce**:
Just visit https://www.facebook.com/permalink.php?story_fbid=267102693997732&id=178816082826394 and click "Not Now" on the create-an-account prompt (if you're prompted).
Edge and Firefox both agree on the unwanted rendering, whereas Chrome and Safari don't show any overlap. So this seems likely to be a Facebook bug where they're accidentally depending on a WebKit/Blink behavior.
[](https://webcompat.com/uploads/2018/11/76cb9e89-d23c-41bd-8c1f-3b804a4f2365.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.facebook.com - design is broken - <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/permalink.php?story_fbid=267102693997732&id=178816082826394
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: "redwoodcity.org" title-text is covered up by paragraph below it, in Firefox and Edge
**Steps to Reproduce**:
Just visit https://www.facebook.com/permalink.php?story_fbid=267102693997732&id=178816082826394 and click "Not Now" on the create-an-account prompt (if you're prompted).
Edge and Firefox both agree on the unwanted rendering, whereas Chrome and Safari don't show any overlap. So this seems likely to be a Facebook bug where they're accidentally depending on a WebKit/Blink behavior.
[](https://webcompat.com/uploads/2018/11/76cb9e89-d23c-41bd-8c1f-3b804a4f2365.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_usab | design is broken url browser version firefox operating system linux tested another browser yes problem type design is broken description redwoodcity org title text is covered up by paragraph below it in firefox and edge steps to reproduce just visit and click not now on the create an account prompt if you re prompted edge and firefox both agree on the unwanted rendering whereas chrome and safari don t show any overlap so this seems likely to be a facebook bug where they re accidentally depending on a webkit blink behavior browser configuration none from with ❤️ | 0 |
284,384 | 21,416,579,492 | IssuesEvent | 2022-04-22 11:27:58 | opentelekomcloud/vault-plugin-secrets-openstack | https://api.github.com/repos/opentelekomcloud/vault-plugin-secrets-openstack | closed | README reference command inconsistency | documentation | I believe this
``` vault read /os/creds/example-role ```
should be:
``` vault read /openstack/creds/example-role ```
to be aligned with the previous commands.
https://github.com/opentelekomcloud/vault-plugin-secrets-openstack/blob/e56d944faeea6de3e601342395858182748e5c37/README.md?plain=1#L81 | 1.0 | README reference command inconsistency - I believe this
``` vault read /os/creds/example-role ```
should be:
``` vault read /openstack/creds/example-role ```
to be aligned with the previous commands.
https://github.com/opentelekomcloud/vault-plugin-secrets-openstack/blob/e56d944faeea6de3e601342395858182748e5c37/README.md?plain=1#L81 | non_usab | readme reference command inconsistency i believe this vault read os creds example role should be vault read openstack creds example role to be aligned with the previous commands | 0 |
235,566 | 7,740,291,878 | IssuesEvent | 2018-05-28 20:42:58 | GMDevinity/FloodIssues | https://api.github.com/repos/GMDevinity/FloodIssues | closed | Remove waterfall ambient | Modification Priority - Lower | ambient/levels/canals/dam_water_loop2.wav
This has been playing on the sewer pipes and is a major disturbance constantly having to listen to it, obscuring phase music, building
#363
players don't have to stopsound without also having the music stop | 1.0 | Remove waterfall ambient - ambient/levels/canals/dam_water_loop2.wav
This has been playing on the sewer pipes and is a major disturbance constantly having to listen to it, obscuring phase music, building
#363
players don't have to stopsound without also having the music stop | non_usab | remove waterfall ambient ambient levels canals dam water wav this has been playing on the sewer pipes and is a major disturbance constantly having to listen to it obscuring phase music building players don t have to stopsound without also having the music stop | 0 |
12,427 | 7,873,919,680 | IssuesEvent | 2018-06-25 15:30:50 | coreos/tectonic-installer | https://api.github.com/repos/coreos/tectonic-installer | closed | Tectonic console page turn to "Error Loading ... please try again" while accessing from places afar | kind/usabilty | Hi, Team,
I Installed Tectonic on AWS(Singapore) successfully.
But when I tried to access console page from California office, always telling me “Error Loading…”, then clicking “try again”, its content coming out as last. And accessing from Singapore office is fine.
**How to improve this use experience?** “Each sub-menu page need a double fresh **accessing for places afar**”

Best wishes,
Brant
| True | Tectonic console page turn to "Error Loading ... please try again" while accessing from places afar - Hi, Team,
I Installed Tectonic on AWS(Singapore) successfully.
But when I tried to access console page from California office, always telling me “Error Loading…”, then clicking “try again”, its content coming out as last. And accessing from Singapore office is fine.
**How to improve this use experience?** “Each sub-menu page need a double fresh **accessing for places afar**”

Best wishes,
Brant
| usab | tectonic console page turn to error loading please try again while accessing from places afar hi team i installed tectonic on aws singapore successfully but when i tried to access console page from california office always telling me “error loading…” then clicking “try again” its content coming out as last and accessing from singapore office is fine how to improve this use experience “each sub menu page need a double fresh accessing for places afar ” best wishes brant | 1 |
8,904 | 6,029,518,073 | IssuesEvent | 2017-06-08 18:12:41 | unfoldingWord-dev/translationCore | https://api.github.com/repos/unfoldingWord-dev/translationCore | closed | There should be informative text on all dialogs that indicate that something is happening | QA Passed Usability | The dialog with the progress bar and animated icon should always indicate to the user what is happening while he is waiting. | True | There should be informative text on all dialogs that indicate that something is happening - The dialog with the progress bar and animated icon should always indicate to the user what is happening while he is waiting. | usab | there should be informative text on all dialogs that indicate that something is happening the dialog with the progress bar and animated icon should always indicate to the user what is happening while he is waiting | 1 |
519,138 | 15,045,603,430 | IssuesEvent | 2021-02-03 05:48:14 | ryanclark/karma-webpack | https://api.github.com/repos/ryanclark/karma-webpack | closed | [4.0.0-rc4] Regression after removing lodash (when using multi-compiler mode) | help wanted priority: 3 (required) severity: 2 (regression) status: Approved type: Bug | In #364 `_.clone` was replaced with `Object.assign`.
This change assumed that `webpackOptions` is always an object, but in fact, it can be an array (multi-compiler mode, see https://github.com/webpack/webpack/tree/master/examples/multi-compiler).
So after this change, an array `[ objA, objB ]` becomes an object `{ 0: objA, 1: objB }` and then some subsequent logic gets changed, but also the subsequent `webpack(...)` call checks the passed config and throws an error:
```
WebpackOptionsValidationError: Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.
- configuration has an unknown property '1'. These properties are valid:
...
```
I propose to rollback this change then. But instead of using `lodash` we could use `lodash.clone`: https://www.npmjs.com/package/lodash.clone as it's the only lodash method used.
| 1.0 | [4.0.0-rc4] Regression after removing lodash (when using multi-compiler mode) - In #364 `_.clone` was replaced with `Object.assign`.
This change assumed that `webpackOptions` is always an object, but in fact, it can be an array (multi-compiler mode, see https://github.com/webpack/webpack/tree/master/examples/multi-compiler).
So after this change, an array `[ objA, objB ]` becomes an object `{ 0: objA, 1: objB }` and then some subsequent logic gets changed, but also the subsequent `webpack(...)` call checks the passed config and throws an error:
```
WebpackOptionsValidationError: Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.
- configuration has an unknown property '1'. These properties are valid:
...
```
I propose to rollback this change then. But instead of using `lodash` we could use `lodash.clone`: https://www.npmjs.com/package/lodash.clone as it's the only lodash method used.
| non_usab | regression after removing lodash when using multi compiler mode in clone was replaced with object assign this change assumed that webpackoptions is always an object but in fact it can be an array multi compiler mode see so after this change an array becomes an object obja objb and then some subsequent logic gets changed but also the subsequent webpack call checks the passed config and throws an error webpackoptionsvalidationerror invalid configuration object webpack has been initialised using a configuration object that does not match the api schema configuration has an unknown property these properties are valid i propose to rollback this change then but instead of using lodash we could use lodash clone as it s the only lodash method used | 0 |
10,998 | 7,009,470,861 | IssuesEvent | 2017-12-19 19:16:29 | gbif/portal16 | https://api.github.com/repos/gbif/portal16 | closed | gbif network map zoomlevel and background | impact medium usability | 1) changing to affiliates and then to global the map zooms way out. the normal default looks better.
2) the lack of background or border makes it look a bit weird when zoom out. the attribution in the corner, the zoom buttons and the legend floats out of context.

Given that this page is used a lot for outreach I'll assign it medium impact. | True | gbif network map zoomlevel and background - 1) changing to affiliates and then to global the map zooms way out. the normal default looks better.
2) the lack of background or border makes it look a bit weird when zoom out. the attribution in the corner, the zoom buttons and the legend floats out of context.

Given that this page is used a lot for outreach I'll assign it medium impact. | usab | gbif network map zoomlevel and background changing to affiliates and then to global the map zooms way out the normal default looks better the lack of background or border makes it look a bit weird when zoom out the attribution in the corner the zoom buttons and the legend floats out of context given that this page is used a lot for outreach i ll assign it medium impact | 1 |
19,344 | 13,893,498,643 | IssuesEvent | 2020-10-19 13:36:17 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [ML] Anomaly explorer - links to single metric viewer don't open in new tab anymore | :ml Feature:Anomaly Detection regression usability v7.11.0 | **Found in version**
- 7.10.0-bc1
**Browser**
- Chrome
**Steps to reproduce**
- Open an AD job in the anomaly explorer
- Click the `View` button in a chart or click the `View series` for an anomalies list entry
**Expected result**
- The user should be able to open the single metric viewer in a new tab, either by default or by using the browser functionality (right click -> open link in new tab)
**Actual result**
- There's no way to open the single metric viewer in a new tab using the buttons. As a result the user loses the filtered anomaly explorer view and needs to manually restore the state if they wanted to continue with investigation.
**Additional information**
- This is a regression as in 7.9 both links opened in a new tab by default | True | [ML] Anomaly explorer - links to single metric viewer don't open in new tab anymore - **Found in version**
- 7.10.0-bc1
**Browser**
- Chrome
**Steps to reproduce**
- Open an AD job in the anomaly explorer
- Click the `View` button in a chart or click the `View series` for an anomalies list entry
**Expected result**
- The user should be able to open the single metric viewer in a new tab, either by default or by using the browser functionality (right click -> open link in new tab)
**Actual result**
- There's no way to open the single metric viewer in a new tab using the buttons. As a result the user loses the filtered anomaly explorer view and needs to manually restore the state if they wanted to continue with investigation.
**Additional information**
- This is a regression as in 7.9 both links opened in a new tab by default | usab | anomaly explorer links to single metric viewer don t open in new tab anymore found in version browser chrome steps to reproduce open an ad job in the anomaly explorer click the view button in a chart or click the view series for an anomalies list entry expected result the user should be able to open the single metric viewer in a new tab either by default or by using the browser functionality right click open link in new tab actual result there s no way to open the single metric viewer in a new tab using the buttons as a result the user loses the filtered anomaly explorer view and needs to manually restore the state if they wanted to continue with investigation additional information this is a regression as in both links opened in a new tab by default | 1 |
24,528 | 23,874,112,583 | IssuesEvent | 2022-09-07 17:18:07 | fabric-testbed/fabric-portal | https://api.github.com/repos/fabric-testbed/fabric-portal | closed | Update the Signup Step 1 UI | usability | - [x] Highlight the information that "Please note that ORCID listed as the first available provider does not work well, please choose your institution from the list instead." right above the "Proceed" button. | True | Update the Signup Step 1 UI - - [x] Highlight the information that "Please note that ORCID listed as the first available provider does not work well, please choose your institution from the list instead." right above the "Proceed" button. | usab | update the signup step ui highlight the information that please note that orcid listed as the first available provider does not work well please choose your institution from the list instead right above the proceed button | 1 |
14,994 | 9,639,284,585 | IssuesEvent | 2019-05-16 13:14:05 | peeringdb/peeringdb | https://api.github.com/repos/peeringdb/peeringdb | closed | number of connected networks in the Exchanges search results | Minor enhancement usability | http://ubersmith.peeringdb.com/admin/supportmgr/ticket_view.php?ticket=11850
very much miss the ability to see the number of connected
networks in the Exchanges search results.
```
Previously I'd be able to gauge the size of a set of IXP's using
```
PeeringDB...and now I have to click on each search result individually and count
while scrolling.
```
Please consider this feature request.
```
Thanks,
-Jacob Zack
Sr. DNS Administator - CIRA (.CA TLD)
| True | number of connected networks in the Exchanges search results - http://ubersmith.peeringdb.com/admin/supportmgr/ticket_view.php?ticket=11850
very much miss the ability to see the number of connected
networks in the Exchanges search results.
```
Previously I'd be able to gauge the size of a set of IXP's using
```
PeeringDB...and now I have to click on each search result individually and count
while scrolling.
```
Please consider this feature request.
```
Thanks,
-Jacob Zack
Sr. DNS Administator - CIRA (.CA TLD)
| usab | number of connected networks in the exchanges search results very much miss the ability to see the number of connected networks in the exchanges search results previously i d be able to gauge the size of a set of ixp s using peeringdb and now i have to click on each search result individually and count while scrolling please consider this feature request thanks jacob zack sr dns administator cira ca tld | 1 |
369,844 | 10,918,931,052 | IssuesEvent | 2019-11-21 17:54:15 | nemtech/catapult-rest | https://api.github.com/repos/nemtech/catapult-rest | closed | Incorrect transaction status codes | priority | Description:
Incorrect status codes end up on the client, while the server logs show correct status code.
Steps:
1. Run the following scenario
Scenario: 1. An account blocks receiving transactions containing a specific asset
Given Bobby blocks receiving transactions containing the following assets:
| ticket |
| voucher |
When Alex tries to send 1 asset "ticket" to Bobby
Then Bobby should receive a confirmation message
And Alex should receive the error "Failure_RestrictionAccount_Mosaic_Transfer_Prohibited"
2. Observe the server logs. You'll notice Failure_RestrictionAccount_Mosaic_Transfer_Prohibited in the logs. This means that the server returns the correct error code.
3. However, upon letting the error propagate to the client, you'll see Failure_RestrictionAccount_Operation_Type_Prohibited
Comparing https://github.com/nemtech/catapult-server/blob/v0.9.0.1/plugins/txes/restriction_account/src/validators/Results.h to catapult-sdk/src/model/status.js it looks like the codes are off by 1. For example,
/// Validation failed because the mosaic transfer is prohibited by the recipient.
DEFINE_RESTRICTION_ACCOUNT_RESULT(Mosaic_Transfer_Prohibited, 12);
**_should translate to_**
case 0x8050000**C**: return 'Failure_RestrictionAccount_Mosaic_Transfer_Prohibited';
**_but is_**
case 0x8050000**B**: return 'Failure_RestrictionAccount_Mosaic_Transfer_Prohibited'; | 1.0 | Incorrect transaction status codes - Description:
Incorrect status codes end up on the client, while the server logs show correct status code.
Steps:
1. Run the following scenario
Scenario: 1. An account blocks receiving transactions containing a specific asset
Given Bobby blocks receiving transactions containing the following assets:
| ticket |
| voucher |
When Alex tries to send 1 asset "ticket" to Bobby
Then Bobby should receive a confirmation message
And Alex should receive the error "Failure_RestrictionAccount_Mosaic_Transfer_Prohibited"
2. Observe the server logs. You'll notice Failure_RestrictionAccount_Mosaic_Transfer_Prohibited in the logs. This means that the server returns the correct error code.
3. However, upon letting the error propagate to the client, you'll see Failure_RestrictionAccount_Operation_Type_Prohibited
Comparing https://github.com/nemtech/catapult-server/blob/v0.9.0.1/plugins/txes/restriction_account/src/validators/Results.h to catapult-sdk/src/model/status.js it looks like the codes are off by 1. For example,
/// Validation failed because the mosaic transfer is prohibited by the recipient.
DEFINE_RESTRICTION_ACCOUNT_RESULT(Mosaic_Transfer_Prohibited, 12);
**_should translate to_**
case 0x8050000**C**: return 'Failure_RestrictionAccount_Mosaic_Transfer_Prohibited';
**_but is_**
case 0x8050000**B**: return 'Failure_RestrictionAccount_Mosaic_Transfer_Prohibited'; | non_usab | incorrect transaction status codes description incorrect status codes end up on the client while the server logs show correct status code steps run the following scenario scenario an account blocks receiving transactions containing a specific asset given bobby blocks receiving transactions containing the following assets ticket voucher when alex tries to send asset ticket to bobby then bobby should receive a confirmation message and alex should receive the error failure restrictionaccount mosaic transfer prohibited observe the server logs you ll notice failure restrictionaccount mosaic transfer prohibited in the logs this means that the server returns the correct error code however upon letting the error propagate to the client you ll see failure restrictionaccount operation type prohibited comparing to catapult sdk src model status js it looks like the codes are off by for example validation failed because the mosaic transfer is prohibited by the recipient define restriction account result mosaic transfer prohibited should translate to case c return failure restrictionaccount mosaic transfer prohibited but is case b return failure restrictionaccount mosaic transfer prohibited | 0 |
16,640 | 11,177,509,490 | IssuesEvent | 2019-12-30 10:57:01 | tobiasanker/SakuraTree | https://api.github.com/repos/tobiasanker/SakuraTree | opened | Add prebuild binaries | usability | ### Description
To make it easier for interesten people to test this project, a set of prebuild binaries should be made available.
### Possible Implementation
upgrade the current gitlab-ci-runner to build binaries for the common linux-distributions, like ubuntu, debian and centos, but only in case of merge-requests
| True | Add prebuild binaries - ### Description
To make it easier for interesten people to test this project, a set of prebuild binaries should be made available.
### Possible Implementation
upgrade the current gitlab-ci-runner to build binaries for the common linux-distributions, like ubuntu, debian and centos, but only in case of merge-requests
| usab | add prebuild binaries description to make it easier for interesten people to test this project a set of prebuild binaries should be made available possible implementation upgrade the current gitlab ci runner to build binaries for the common linux distributions like ubuntu debian and centos but only in case of merge requests | 1 |
43,837 | 2,893,238,809 | IssuesEvent | 2015-06-15 16:55:41 | roblox-linux-wrapper/roblox-linux-wrapper | https://api.github.com/repos/roblox-linux-wrapper/roblox-linux-wrapper | opened | Enable graphical improvements on wine-staging by default | enhancement feature-request priority:medium | `wine-staging` recently added performance enhancing settings, which should most definitely be enabled by default. The wrapper should enable CSMT, as well as ensure CUDA and PhysX are working correctly. This will hopefully improve the performance of the game for many, and may even reduce the frequency of crashes.
Here's a list of things that we can implement:
* [CSMT](https://github.com/wine-compholio/wine-staging/wiki/CSMT)
* [CUDA support](https://github.com/wine-compholio/wine-staging/wiki/CUDA)
* [PhysX acceleration support](https://github.com/wine-compholio/wine-staging/wiki/PhysX)
* [Some useful documentation on advanced wine-staging usage](https://github.com/wine-compholio/wine-staging/wiki/Usage)
| 1.0 | Enable graphical improvements on wine-staging by default - `wine-staging` recently added performance enhancing settings, which should most definitely be enabled by default. The wrapper should enable CSMT, as well as ensure CUDA and PhysX are working correctly. This will hopefully improve the performance of the game for many, and may even reduce the frequency of crashes.
Here's a list of things that we can implement:
* [CSMT](https://github.com/wine-compholio/wine-staging/wiki/CSMT)
* [CUDA support](https://github.com/wine-compholio/wine-staging/wiki/CUDA)
* [PhysX acceleration support](https://github.com/wine-compholio/wine-staging/wiki/PhysX)
* [Some useful documentation on advanced wine-staging usage](https://github.com/wine-compholio/wine-staging/wiki/Usage)
| non_usab | enable graphical improvements on wine staging by default wine staging recently added performance enhancing settings which should most definitely be enabled by default the wrapper should enable csmt as well as ensure cuda and physx are working correctly this will hopefully improve the performance of the game for many and may even reduce the frequency of crashes here s a list of things that we can implement | 0 |
14,177 | 8,886,911,916 | IssuesEvent | 2019-01-15 02:51:13 | fnielsen/scholia | https://api.github.com/repos/fnielsen/scholia | opened | In topic aspect, add panel on organizations associated with authors publishing on the topic | P108-employer P50-author P921-main-subject SPARQL panels usability | Here we go for Zika:
```SPARQL
# #defaultView:Graph
SELECT ?citing_organization ?citing_organizationLabel ?cited_organization ?cited_organizationLabel
WITH {
SELECT DISTINCT ?citing_organization ?cited_organization WHERE {
?citing_author (wdt:P108|wdt:P1416) ?citing_organization .
?cited_author (wdt:P108|wdt:P1416) ?cited_organization .
?citing_work wdt:P50 ?citing_author .
?citing_work wdt:P921 wd:Q15794049 .
?cited_work wdt:P921 wd:Q15794049 .
?citing_work wdt:P2860 ?cited_work .
?cited_work wdt:P50 ?cited_author .
FILTER (?citing_work != ?cited_work)
FILTER NOT EXISTS {
?citing_work wdt:P50 ?author .
?citing_work wdt:P2860 ?cited_work .
?cited_work wdt:P50 ?author .
}
}
} AS %results
WHERE {
INCLUDE %results
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
```
Probably worth thinking about encoding the frequency of interaction in the colour of the arrows. | True | In topic aspect, add panel on organizations associated with authors publishing on the topic - Here we go for Zika:
```SPARQL
# #defaultView:Graph
SELECT ?citing_organization ?citing_organizationLabel ?cited_organization ?cited_organizationLabel
WITH {
SELECT DISTINCT ?citing_organization ?cited_organization WHERE {
?citing_author (wdt:P108|wdt:P1416) ?citing_organization .
?cited_author (wdt:P108|wdt:P1416) ?cited_organization .
?citing_work wdt:P50 ?citing_author .
?citing_work wdt:P921 wd:Q15794049 .
?cited_work wdt:P921 wd:Q15794049 .
?citing_work wdt:P2860 ?cited_work .
?cited_work wdt:P50 ?cited_author .
FILTER (?citing_work != ?cited_work)
FILTER NOT EXISTS {
?citing_work wdt:P50 ?author .
?citing_work wdt:P2860 ?cited_work .
?cited_work wdt:P50 ?author .
}
}
} AS %results
WHERE {
INCLUDE %results
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
```
Probably worth thinking about encoding the frequency of interaction in the colour of the arrows. | usab | in topic aspect add panel on organizations associated with authors publishing on the topic here we go for zika sparql defaultview graph select citing organization citing organizationlabel cited organization cited organizationlabel with select distinct citing organization cited organization where citing author wdt wdt citing organization cited author wdt wdt cited organization citing work wdt citing author citing work wdt wd cited work wdt wd citing work wdt cited work cited work wdt cited author filter citing work cited work filter not exists citing work wdt author citing work wdt cited work cited work wdt author as results where include results service wikibase label bd serviceparam wikibase language en probably worth thinking about encoding the frequency of interaction in the colour of the arrows | 1 |
13,676 | 8,638,305,415 | IssuesEvent | 2018-11-23 14:20:11 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | 3D Path - please add "select control point" button | enhancement topic:editor usability | path in 3D does not have select control point button. but it's handled by shift drag
| True | 3D Path - please add "select control point" button - path in 3D does not have select control point button. but it's handled by shift drag
| usab | path please add select control point button path in does not have select control point button but it s handled by shift drag | 1 |
26,858 | 27,275,839,003 | IssuesEvent | 2023-02-23 05:05:10 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | `clickhouse-client` If the password is not specified in the command line, and no-password auth is rejected, ask for password interactively. | easy task usability | Currently it works as follows:
```
milovidov@milovidov-desktop:~$ clickhouse client --host xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com --secure
ClickHouse client version 23.2.1.1.
Connecting to xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com:9440 as user default.
Code: 516. DB::Exception: Received from xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com:9440. DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name.
```
Should fall back to interactive password prompt (similarly if I added the `--password` argument):
```
milovidov@milovidov-desktop:~$ clickhouse client --host xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com --secure --password
ClickHouse client version 23.2.1.1.
Password for user (default):
``` | True | `clickhouse-client` If the password is not specified in the command line, and no-password auth is rejected, ask for password interactively. - Currently it works as follows:
```
milovidov@milovidov-desktop:~$ clickhouse client --host xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com --secure
ClickHouse client version 23.2.1.1.
Connecting to xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com:9440 as user default.
Code: 516. DB::Exception: Received from xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com:9440. DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name.
```
Should fall back to interactive password prompt (similarly if I added the `--password` argument):
```
milovidov@milovidov-desktop:~$ clickhouse client --host xqr42pv6yb.eu-west-1.aws.clickhouse-staging.com --secure --password
ClickHouse client version 23.2.1.1.
Password for user (default):
``` | usab | clickhouse client if the password is not specified in the command line and no password auth is rejected ask for password interactively currently it works as follows milovidov milovidov desktop clickhouse client host eu west aws clickhouse staging com secure clickhouse client version connecting to eu west aws clickhouse staging com as user default code db exception received from eu west aws clickhouse staging com db exception default authentication failed password is incorrect or there is no user with such name should fall back to interactive password prompt similarly if i added the password argument milovidov milovidov desktop clickhouse client host eu west aws clickhouse staging com secure password clickhouse client version password for user default | 1 |
196,554 | 22,442,140,348 | IssuesEvent | 2022-06-21 02:34:25 | valdisiljuconoks/AlloyTech | https://api.github.com/repos/valdisiljuconoks/AlloyTech | closed | WS-2019-0333 (High) detected in handlebars-1.3.0.tgz - autoclosed | security vulnerability | ## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.3.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.3.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.3.0.tgz</a></p>
<p>Path to dependency file: AlloyTech/AlloyTechEpi10/modules/_protected/Shell/Shell/10.1.0.0/ClientResources/lib/xstyle/package.json</p>
<p>Path to vulnerable library: AlloyTech/AlloyTechEpi10/modules/_protected/Shell/Shell/10.1.0.0/ClientResources/lib/xstyle/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- intern-geezer-2.2.3.tgz (Root Library)
- istanbul-0.2.16.tgz
- :x: **handlebars-1.3.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it's possbile to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0333 (High) detected in handlebars-1.3.0.tgz - autoclosed - ## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-1.3.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-1.3.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-1.3.0.tgz</a></p>
<p>Path to dependency file: AlloyTech/AlloyTechEpi10/modules/_protected/Shell/Shell/10.1.0.0/ClientResources/lib/xstyle/package.json</p>
<p>Path to vulnerable library: AlloyTech/AlloyTechEpi10/modules/_protected/Shell/Shell/10.1.0.0/ClientResources/lib/xstyle/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- intern-geezer-2.2.3.tgz (Root Library)
- istanbul-0.2.16.tgz
- :x: **handlebars-1.3.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template it's possbile to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file alloytech modules protected shell shell clientresources lib xstyle package json path to vulnerable library alloytech modules protected shell shell clientresources lib xstyle node modules handlebars package json dependency hierarchy intern geezer tgz root library istanbul tgz x handlebars tgz vulnerable library vulnerability details in handlebars versions prior to are vulnerable to prototype pollution using a malicious template it s possbile to add or modify properties to the object prototype this can also lead to dos and rce in certain conditions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
6,971 | 4,705,916,176 | IssuesEvent | 2016-10-13 15:42:17 | bitsquare/bitsquare | https://api.github.com/repos/bitsquare/bitsquare | closed | Make all UI screens safe for min. window size | re: usability [ui] | 1. Find which min. window size we want to support 760x560)?
2. Check all screens if there are problems, add scroll panes if needed
<!---
@huboard:{"order":321.6875002384186,"milestone_order":154,"custom_state":""}
-->
| True | Make all UI screens safe for min. window size - 1. Find which min. window size we want to support 760x560)?
2. Check all screens if there are problems, add scroll panes if needed
<!---
@huboard:{"order":321.6875002384186,"milestone_order":154,"custom_state":""}
-->
| usab | make all ui screens safe for min window size find which min window size we want to support check all screens if there are problems add scroll panes if needed huboard order milestone order custom state | 1 |
22,844 | 20,356,723,079 | IssuesEvent | 2022-02-20 03:25:42 | TravelMapping/DataProcessing | https://api.github.com/repos/TravelMapping/DataProcessing | closed | Produce a "things to check out" in user logs? | enhancement user statistics usability TravelerList class | Thinking about some of the emails I get from users when they send in .list updates, where they find the errors in their lists from highway data updates over the course of a few days, it might be nice to have a report for users indicating which routes they have in their lists that have recent updates entries. | True | Produce a "things to check out" in user logs? - Thinking about some of the emails I get from users when they send in .list updates, where they find the errors in their lists from highway data updates over the course of a few days, it might be nice to have a report for users indicating which routes they have in their lists that have recent updates entries. | usab | produce a things to check out in user logs thinking about some of the emails i get from users when they send in list updates where they find the errors in their lists from highway data updates over the course of a few days it might be nice to have a report for users indicating which routes they have in their lists that have recent updates entries | 1 |
562,867 | 16,671,376,537 | IssuesEvent | 2021-06-07 11:19:35 | vaticle/typedb-benchmark | https://api.github.com/repos/vaticle/typedb-benchmark | opened | Agents for read performance with reasoning | priority: blocker type: feature | ## Description
Subsequent to the significant refactor this year, we must introduce new agents. We particularly want to focus on creating a variety of queries that demonstrate the different challenges in reasoner. | 1.0 | Agents for read performance with reasoning - ## Description
Subsequent to the significant refactor this year, we must introduce new agents. We particularly want to focus on creating a variety of queries that demonstrate the different challenges in reasoner. | non_usab | agents for read performance with reasoning description subsequent to the significant refactor this year we must introduce new agents we particularly want to focus on creating a variety of queries that demonstrate the different challenges in reasoner | 0 |
290,651 | 8,902,100,876 | IssuesEvent | 2019-01-17 06:00:18 | abpframework/abp | https://api.github.com/repos/abpframework/abp | opened | Implement CorrelationId | feature framework priority:normal | That is passed between service calls to track the same request/operation/transaction. It's also logged. | 1.0 | Implement CorrelationId - That is passed between service calls to track the same request/operation/transaction. It's also logged. | non_usab | implement correlationid that is passed between service calls to track the same request operation transaction it s also logged | 0 |
380,571 | 11,267,812,728 | IssuesEvent | 2020-01-14 03:41:18 | crcn/tandem | https://api.github.com/repos/crcn/tandem | closed | Only allow elements to be dropped in slots | bug estimate: small priority: high | Users are able to add children to component instances which:
1. don't appear in the editor since the behavior isn't supported
2. break the react compiler | 1.0 | Only allow elements to be dropped in slots - Users are able to add children to component instances which:
1. don't appear in the editor since the behavior isn't supported
2. break the react compiler | non_usab | only allow elements to be dropped in slots users are able to add children to component instances which don t appear in the editor since the behavior isn t supported break the react compiler | 0 |
84,295 | 10,369,117,961 | IssuesEvent | 2019-09-07 23:13:22 | ProyectoIntegrador2018/dr_movil | https://api.github.com/repos/ProyectoIntegrador2018/dr_movil | opened | Bugs | documentation | ### Bug *nombre*
### *Descripción de qué es el bug*
### *Descripción de cómo recrear el bug* | 1.0 | Bugs - ### Bug *nombre*
### *Descripción de qué es el bug*
### *Descripción de cómo recrear el bug* | non_usab | bugs bug nombre descripción de qué es el bug descripción de cómo recrear el bug | 0 |
240,609 | 7,803,505,454 | IssuesEvent | 2018-06-11 00:48:00 | kubeflow/kubeflow | https://api.github.com/repos/kubeflow/kubeflow | closed | Deadlocks configuring envoy for IAP | area/bootstrap area/front-end platform/gcp priority/p1 release/0.2.0 | I'm noticing deadlocks and other problems configuring envoy using the IAP script. Some of the problems I observe
1. iap.sh is never able to acquire the lock and therefore able to write the envoy-config.json
1. envoy container is crash looping - prevents GCP loadbalancer from detecting the backend is heathy
I think we should make the following changes
1. There should be a single pod responsible for enabling IAP and updating the envoy-config map as needed
* We should move this out of the sidecar and into a separate deployment
* Locking should be less important because there won't be contention
1. The envoy sidecars are now just responsible for updating envoy config based on the config map
* They no longer need to acquire a lock
* They can periodically check the config map and compute a hash to know when it changes
1. We should provide a default config that will allow envoy to startup but ensure non secure traffic is blocked
* This way we can avoid the problems with the ingress thinking the backend is unhealthy. | 1.0 | Deadlocks configuring envoy for IAP - I'm noticing deadlocks and other problems configuring envoy using the IAP script. Some of the problems I observe
1. iap.sh is never able to acquire the lock and therefore able to write the envoy-config.json
1. envoy container is crash looping - prevents GCP loadbalancer from detecting the backend is heathy
I think we should make the following changes
1. There should be a single pod responsible for enabling IAP and updating the envoy-config map as needed
* We should move this out of the sidecar and into a separate deployment
* Locking should be less important because there won't be contention
1. The envoy sidecars are now just responsible for updating envoy config based on the config map
* They no longer need to acquire a lock
* They can periodically check the config map and compute a hash to know when it changes
1. We should provide a default config that will allow envoy to startup but ensure non secure traffic is blocked
* This way we can avoid the problems with the ingress thinking the backend is unhealthy. | non_usab | deadlocks configuring envoy for iap i m noticing deadlocks and other problems configuring envoy using the iap script some of the problems i observe iap sh is never able to acquire the lock and therefore able to write the envoy config json envoy container is crash looping prevents gcp loadbalancer from detecting the backend is heathy i think we should make the following changes there should be a single pod responsible for enabling iap and updating the envoy config map as needed we should move this out of the sidecar and into a separate deployment locking should be less important because there won t be contention the envoy sidecars are now just responsible for updating envoy config based on the config map they no longer need to acquire a lock they can periodically check the config map and compute a hash to know when it changes we should provide a default config that will allow envoy to startup but ensure non secure traffic is blocked this way we can avoid the problems with the ingress thinking the backend is unhealthy | 0 |
57,730 | 14,199,765,734 | IssuesEvent | 2020-11-16 03:22:10 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Can't use download-ci-llvm any more | A-LLVM A-rustbuild C-bug T-infra | Enabling download-ci-llvm used to work just fine for me, until today.
After rebasing on top of 75042566d1c90d912f22e4db43b6d3af98447986 , doing anything with rustc beyond stage 1 fails:
```
$ ./x.py build
[...]
Building stage1 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
[...]
error while loading shared libraries: libLLVM-11-rust-1.49.0-nightly.so: cannot open shared object file: No such file or directory
```
To verify it's not due to some caching issue caused by my experiments in #79043 that temporarily turned the feature off I deleted the entire build directory. It correctly downloaded the stage0 and llvm artifacts, but still failed when trying to run stage1 rustc. | 1.0 | Can't use download-ci-llvm any more - Enabling download-ci-llvm used to work just fine for me, until today.
After rebasing on top of 75042566d1c90d912f22e4db43b6d3af98447986 , doing anything with rustc beyond stage 1 fails:
```
$ ./x.py build
[...]
Building stage1 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
[...]
error while loading shared libraries: libLLVM-11-rust-1.49.0-nightly.so: cannot open shared object file: No such file or directory
```
To verify it's not due to some caching issue caused by my experiments in #79043 that temporarily turned the feature off I deleted the entire build directory. It correctly downloaded the stage0 and llvm artifacts, but still failed when trying to run stage1 rustc. | non_usab | can t use download ci llvm any more enabling download ci llvm used to work just fine for me until today after rebasing on top of doing anything with rustc beyond stage fails x py build building std artifacts unknown linux gnu unknown linux gnu error while loading shared libraries libllvm rust nightly so cannot open shared object file no such file or directory to verify it s not due to some caching issue caused by my experiments in that temporarily turned the feature off i deleted the entire build directory it correctly downloaded the and llvm artifacts but still failed when trying to run rustc | 0 |
468,723 | 13,489,310,464 | IssuesEvent | 2020-09-11 13:42:50 | Eastrall/Rhisis | https://api.github.com/repos/Eastrall/Rhisis | opened | Provide a configuration parameter to send the world server port to client | comp: network enhancement feature-request good first issue priority: low srv: login | The FLYFF client tries to connect to the world server using the default 5400 port. This implies that the world servers cannot be hosted on the same machine.
In order to improve flexibility, Rhisis should provide an option to send the world servers port to the client once the `CERTIFY` packet is handled in the `LoginServer`. | 1.0 | Provide a configuration parameter to send the world server port to client - The FLYFF client tries to connect to the world server using the default 5400 port. This implies that the world servers cannot be hosted on the same machine.
In order to improve flexibility, Rhisis should provide an option to send the world servers port to the client once the `CERTIFY` packet is handled in the `LoginServer`. | non_usab | provide a configuration parameter to send the world server port to client the flyff client tries to connect to the world server using the default port this implies that the world servers cannot be hosted on the same machine in order to improve flexibility rhisis should provide an option to send the world servers port to the client once the certify packet is handled in the loginserver | 0 |
130,501 | 10,617,491,263 | IssuesEvent | 2019-10-12 19:28:48 | tstreamDOTh/firebase-swiss | https://api.github.com/repos/tstreamDOTh/firebase-swiss | closed | Setup a basic test case suite | firebase hacktoberfest help wanted javascript test | Explore firebase emulator & create a basic test case suite. Should have a sample test case probably using `CREATE` fire function. | 1.0 | Setup a basic test case suite - Explore firebase emulator & create a basic test case suite. Should have a sample test case probably using `CREATE` fire function. | non_usab | setup a basic test case suite explore firebase emulator create a basic test case suite should have a sample test case probably using create fire function | 0 |
500,464 | 14,500,131,152 | IssuesEvent | 2020-12-11 17:37:09 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Input type of customizable option doesn't get returned through GraphQL | Component: QuoteGraphQl Issue: Format is valid Priority: P1 Progress: done Project: GraphQL | ### Preconditions
Github branches `magento2/2.4-develop` and `architecture/master`
### Steps to reproduce
The GraphQL type `SelectedCustomizableOption` is used in multiple places across the Magento GraphQL modules to get the currently selected customizable option of a product through GraphQL. There are various field types for this such as text, textarea, select, multiselect, checkbox, etc. Currently, there's no way for a frontend using GraphQL to know how to render the SelectedCustomizableOption. Yes, the format varies according to different types, but the format is the same for input types of the same category. I believe there's no way to differentiate between a text and a textarea of `SelectedCustomizableOption`. Same with the different select input types.
The associated resolver class _does_ return the 'type of input' from Magento (ref: https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/QuoteGraphQl/Model/CartItem/DataProvider/CustomizableOption.php#L60), but the associated GraphQL schema does not define a field for the same (ref: https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/QuoteGraphQl/etc/schema.graphqls#L346) and hence the 'type of input' doesn't get returned.
Upon some searching, the GraphQL coverage docs did have the 'type of input' as a field in `SelectedCustomizableOption` a couple of months ago (ref: https://github.com/magento/architecture/blob/673438109bbf63d819e96c373ef7622206ff7f9b/design-documents/graph-ql/coverage/add-items-to-cart/AddSimpleProductToCart.graphqls#L59) until the new coverage docs (ref: https://github.com/magento/architecture/blob/master/design-documents/graph-ql/coverage/Cart.graphqls#L114) was matched according to the current GraphQL schema and was thus, removed.
### Expected result
Consistency.
Is the field `type` required to render the input type on the frontend?
- If yes, we need to add the field back to the schema.
- If not, we need to remove it from the resolver for consistency and to avoid confusion for developers in the future.
### Actual result
The field `type` was removed from the QuoteGraphQl module in this commit: https://github.com/magento/magento2/commit/1315577e2099637207f02deec48607d81f7cde46#diff-795a33fde881f18aba5165a5a8c7513fL317 and from the architecture coverage docs a couple months ago and now the code is inconsistent and causes confusion for developers.
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [x] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| 1.0 | Input type of customizable option doesn't get returned through GraphQL - ### Preconditions
Github branches `magento2/2.4-develop` and `architecture/master`
### Steps to reproduce
The GraphQL type `SelectedCustomizableOption` is used in multiple places across the Magento GraphQL modules to get the currently selected customizable option of a product through GraphQL. There are various field types for this such as text, textarea, select, multiselect, checkbox, etc. Currently, there's no way for a frontend using GraphQL to know how to render the SelectedCustomizableOption. Yes, the format varies according to different types, but the format is the same for input types of the same category. I believe there's no way to differentiate between a text and a textarea of `SelectedCustomizableOption`. Same with the different select input types.
The associated resolver class _does_ return the 'type of input' from Magento (ref: https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/QuoteGraphQl/Model/CartItem/DataProvider/CustomizableOption.php#L60), but the associated GraphQL schema does not define a field for the same (ref: https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/QuoteGraphQl/etc/schema.graphqls#L346) and hence the 'type of input' doesn't get returned.
Upon some searching, the GraphQL coverage docs did have the 'type of input' as a field in `SelectedCustomizableOption` a couple of months ago (ref: https://github.com/magento/architecture/blob/673438109bbf63d819e96c373ef7622206ff7f9b/design-documents/graph-ql/coverage/add-items-to-cart/AddSimpleProductToCart.graphqls#L59) until the new coverage docs (ref: https://github.com/magento/architecture/blob/master/design-documents/graph-ql/coverage/Cart.graphqls#L114) was matched according to the current GraphQL schema and was thus, removed.
### Expected result
Consistency.
Is the field `type` required to render the input type on the frontend?
- If yes, we need to add the field back to the schema.
- If not, we need to remove it from the resolver for consistency and to avoid confusion for developers in the future.
### Actual result
The field `type` was removed from the QuoteGraphQl module in this commit: https://github.com/magento/magento2/commit/1315577e2099637207f02deec48607d81f7cde46#diff-795a33fde881f18aba5165a5a8c7513fL317 and from the architecture coverage docs a couple months ago and now the code is inconsistent and causes confusion for developers.
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [x] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| non_usab | input type of customizable option doesn t get returned through graphql preconditions github branches develop and architecture master steps to reproduce the graphql type selectedcustomizableoption is used in multiple places across the magento graphql modules to get the currently selected customizable option of a product through graphql there are various field types for this such as text textarea select multiselect checkbox etc currently there s no way for a frontend using graphql to know how to render the selectedcustomizableoption yes the format varies according to different types but the format is the same for input types of the same category i believe there s no way to differentiate between a text and a textarea of selectedcustomizableoption same with the different select input types the associated resolver class does return the type of input from magento ref but the associated graphql schema does not define a field for the same ref and hence the type of input doesn t get returned upon some searching the graphql coverage docs did have the type of input as a field in selectedcustomizableoption a couple of months ago ref until the new coverage docs ref was matched according to the current graphql schema and was thus removed expected result consistency is the field type required to render the input type on the frontend if yes we need to add the field back to the schema if not we need to remove it from the resolver for consistency and to avoid confusion for developers in the future actual result the field type was removed from the quotegraphql module in this commit and from the architecture coverage docs a couple months ago and now the code is inconsistent and causes confusion for developers please provide assessment for the issue as reporter this information will help during confirmation and issue triage processes severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability” | 0 |
30,963 | 4,229,414,619 | IssuesEvent | 2016-07-04 07:39:19 | pythonapis/6ZJYP2PXGY5CWP2LWTZZFRIL | https://api.github.com/repos/pythonapis/6ZJYP2PXGY5CWP2LWTZZFRIL | closed | rXh9vNI66D3XRM3pGLYlngbN1/lxjcuJUgZOPz6pY/Q4mtbK/0vXkKTIY8aoPns6JkAteppcl8VMICiloOb6ErBOhHUZNaUKxa/ecyNqvysJ/ZbX+5kEKHkhtVCkqF7fZ2aSIvMtBLpLXfhdTJSt8MwSsVxK2ewgv+6L6C+Ttq0= | design | 80cNpX9EtylnOpmFLIUA1xaMYrzp2oYJQ6AhvSuoTOdUp3k9RjwSxd+g53egV9MIGmSjbbt2tjT2ZJK7/RB+X9IW2EnNREKFF3TDUB5/PVkGBKFbiCVYFaxAMaPnMlVgerV+zTIGbAbUtowF3IXzgo3epHXwzPDRGyvZCQoWzmX+C1hjyVK/UhOiKT4zUvOl8zxu4KGkBGoW4WgWQNQh2M4txS1CT63HviDAm53kpmKmL9jgBspPddZfxQPXLvUvwFLZAoamNicCrxW87Fx1DPVJZu2kdoQllvK9/5CocgjOPwXYetQR8ge0flIw9Y4Hif3a6Vij4K7U9E30AK9uo0yVCT05DzP3HAgyGzPeSdmXF3MV6ky0F+FPIO34cVQEewUJEP5FcyjiuSv+QI1ayCQSzW1QobA+Nj1UVaYQxf2Ck3pf83MX7GASHz1VjwS96s9sI6tWEq1fCZ1Mgr0COyi3TCFIBSCWtahom4t/7jK2OWWSZarSyNdM+wOs8crT470kuuGdSL9ATQS2dQOma+ShtyBuePfF4xhd8kQddoxcSapTcNpkba1JbX+FML6ApO7hP3LG9ZwO/5Liv5IlaA7RACsc2KWQC1PDygV7kz7CPi4AM4FelJ8FAMYPl7elYa/8XoMv0qzhx4yIY2LI6HxjBCX2kQcOFDYoaP4vi/NLVxpbpxSO18EyIk9PtiQUUS/lQ30k9JTMNlEtLoSoV9BDusGNhdWC9a3Ed6QFaw0wcmHNbh97fVGQAVzVqGiuvKqwPuapwtk8Io7n0noENI32qC/Bcj7BSfMOTVnVvwsJa2/wD5FztwNM5HD1AHBMN/4McvomXdaP+vUS+srJkGwt8EvudnQIYQIIv8gO8a144NID/4cmoTq70wAlBQ5QRzXGBstFMwph2nKV7bc6e09+deIHQU6wo9nGjz6Mbt+rH5q3ytxInkDrJ+OyCQ84so29L8zJjY5NDsJuIObZJ+O3v+GEb23yCquOoexeIaLvqx99CeGhxDs/sYOVOygTbWbQcSG82foBDfd4JRV26iNndDYpT4EnzSsoSk27gciJrk6GKyG/OJoQd5WnTb22YloTBPFX/yhSXbr/joWTX18UdyATGE2p923vluSDm/j3gnyYYmn6pBZ4/SVEr2jvxifVc76YdMLYy3mg1Ipm7Oq2rklb7mWC2RroARlBqvJ8YB7UAeVYn/WkWs09fa6c | 1.0 | rXh9vNI66D3XRM3pGLYlngbN1/lxjcuJUgZOPz6pY/Q4mtbK/0vXkKTIY8aoPns6JkAteppcl8VMICiloOb6ErBOhHUZNaUKxa/ecyNqvysJ/ZbX+5kEKHkhtVCkqF7fZ2aSIvMtBLpLXfhdTJSt8MwSsVxK2ewgv+6L6C+Ttq0= - 80cNpX9EtylnOpmFLIUA1xaMYrzp2oYJQ6AhvSuoTOdUp3k9RjwSxd+g53egV9MIGmSjbbt2tjT2ZJK7/RB+X9IW2EnNREKFF3TDUB5/PVkGBKFbiCVYFaxAMaPnMlVgerV+zTIGbAbUtowF3IXzgo3epHXwzPDRGyvZCQoWzmX+C1hjyVK/UhOiKT4zUvOl8zxu4KGkBGoW4WgWQNQh2M4txS1CT63HviDAm53kpmKmL9jgBspPddZfxQPXLvUvwFLZAoamNicCrxW87Fx1DPVJZu2kdoQllvK9/5CocgjOPwXYetQR8ge0flIw9Y4Hif3a6Vij4K7U9E30AK9uo0yVCT05DzP3HAgyGzPeSdmXF3MV6ky0F+FPIO34cVQEewUJEP5FcyjiuSv+QI1ayCQSzW1QobA+Nj1UVaYQxf2Ck3pf83MX7GASHz1VjwS96s9sI6tWEq1fCZ1Mgr0COyi3TCFIBSCWtahom4t/7jK2OWWSZarSyNdM+wOs8crT470kuuGdSL9ATQS2dQOma+ShtyBuePfF4xhd8kQddoxcSapTcNpkba1JbX+FML6ApO7hP3LG9ZwO/5Liv5IlaA7RACsc2KWQC1PDygV7kz7CPi4AM4FelJ8FAMYPl7elYa/8XoMv0qzhx4yIY2LI6HxjBCX2kQcOFDYoaP4vi/NLVxpbpxSO18EyIk9PtiQUUS/lQ30k9JTMNlEtLoSoV9BDusGNhdWC9a3Ed6QFaw0wcmHNbh97fVGQAVzVqGiuvKqwPuapwtk8Io7n0noENI32qC/Bcj7BSfMOTVnVvwsJa2/wD5FztwNM5HD1AHBMN/4McvomXdaP+vUS+srJkGwt8EvudnQIYQIIv8gO8a144NID/4cmoTq70wAlBQ5QRzXGBstFMwph2nKV7bc6e09+deIHQU6wo9nGjz6Mbt+rH5q3ytxInkDrJ+OyCQ84so29L8zJjY5NDsJuIObZJ+O3v+GEb23yCquOoexeIaLvqx99CeGhxDs/sYOVOygTbWbQcSG82foBDfd4JRV26iNndDYpT4EnzSsoSk27gciJrk6GKyG/OJoQd5WnTb22YloTBPFX/yhSXbr/joWTX18UdyATGE2p923vluSDm/j3gnyYYmn6pBZ4/SVEr2jvxifVc76YdMLYy3mg1Ipm7Oq2rklb7mWC2RroARlBqvJ8YB7UAeVYn/WkWs09fa6c | non_usab | ecynqvysj zbx rb pvkgbkfbicvyfaxamapnmlvgerv vus yhsxbr | 0 |
546,001 | 15,982,113,200 | IssuesEvent | 2021-04-18 02:03:39 | docker-mailserver/docker-mailserver | https://api.github.com/repos/docker-mailserver/docker-mailserver | closed | Runtime changes for Razor and Clam | area/dependency area/enhancement kind/feature request kind/improvement meta/closed due to age or inactivity meta/needs triage meta/stale priority/high | # I would like some feedback concerning a use case with the razor agent.
## Description
From what I understand of the razor configuration of docker-mailserver, the account/identity creation by the razor-admin is done in the Dockerfile and therefore at build time. The created identity then is the same for all instances (users) of this image. In my case the identity was created on January, 31 at 16:40.
Is this the intended way of using razor? Or should this not be done on a container/user basis? | 1.0 | Runtime changes for Razor and Clam - # I would like some feedback concerning a use case with the razor agent.
## Description
From what I understand of the razor configuration of docker-mailserver, the account/identity creation by the razor-admin is done in the Dockerfile and therefore at build time. The created identity then is the same for all instances (users) of this image. In my case the identity was created on January, 31 at 16:40.
Is this the intended way of using razor? Or should this not be done on a container/user basis? | non_usab | runtime changes for razor and clam i would like some feedback concerning a use case with the razor agent description from what i understand of the razor configuration of docker mailserver the account identity creation by the razor admin is done in the dockerfile and therefore at build time the created identity then is the same for all instances users of this image in my case the identity was created on january at is this the intended way of using razor or should this not be done on a container user basis | 0 |
9,767 | 6,412,558,501 | IssuesEvent | 2017-08-08 03:51:54 | FReBOmusic/FReBO | https://api.github.com/repos/FReBOmusic/FReBO | opened | Start Time Textbox | Usability | In the event that the user navigates to the Availability Screen.
**Expected Response**: The Start Time Textbox should be displayed on the Availability Screen. | True | Start Time Textbox - In the event that the user navigates to the Availability Screen.
**Expected Response**: The Start Time Textbox should be displayed on the Availability Screen. | usab | start time textbox in the event that the user navigates to the availability screen expected response the start time textbox should be displayed on the availability screen | 1 |
419,526 | 28,147,343,095 | IssuesEvent | 2023-04-02 16:34:43 | Mudlet/Mudlet | https://api.github.com/repos/Mudlet/Mudlet | closed | table.contains is not working as expected | needs documentation | #### Brief summary of issue / Description of requested feature:
table.contains is not working properly
#### Steps to reproduce the issue / Reasons for adding feature:
/lua table.contains({8}, 1)
/lua table.contains({8}, 2)
#### Error output / Expected result of feature
true
false
Should be false and false. Looks like it returns true if the number of elements is equal to secon parameter
| 1.0 | table.contains is not working as expected - #### Brief summary of issue / Description of requested feature:
table.contains is not working properly
#### Steps to reproduce the issue / Reasons for adding feature:
/lua table.contains({8}, 1)
/lua table.contains({8}, 2)
#### Error output / Expected result of feature
true
false
Should be false and false. Looks like it returns true if the number of elements is equal to secon parameter
| non_usab | table contains is not working as expected brief summary of issue description of requested feature table contains is not working properly steps to reproduce the issue reasons for adding feature lua table contains lua table contains error output expected result of feature true false should be false and false looks like it returns true if the number of elements is equal to secon parameter | 0 |
1,490 | 3,249,433,401 | IssuesEvent | 2015-10-18 05:35:03 | asciidoctor/asciidoctor | https://api.github.com/repos/asciidoctor/asciidoctor | closed | Move opal_ext from core to Asciidoctor.js | infrastructure wip | A complement of the issue https://github.com/asciidoctor/asciidoctor.js/issues/132 in Asciidoctor.js. Motivation for this migration can be found in that issue. | 1.0 | Move opal_ext from core to Asciidoctor.js - A complement of the issue https://github.com/asciidoctor/asciidoctor.js/issues/132 in Asciidoctor.js. Motivation for this migration can be found in that issue. | non_usab | move opal ext from core to asciidoctor js a complement of the issue in asciidoctor js motivation for this migration can be found in that issue | 0 |
23,693 | 22,595,320,749 | IssuesEvent | 2022-06-29 01:55:41 | pulumi/pulumi-hugo | https://api.github.com/repos/pulumi/pulumi-hugo | opened | not able to do the guide because of cloud account requirements | kind/enhancement impact/usability | in a recent usability study, a person was not able to even try out a guide because he was not prepared to use his work aws account, and had no plan for where to put the resources.
can we direct folks to the docker learn guide in this situation?
can we have a guide in the docs specifically for folks similar to this person?
or are there other things we can do to provide an experience for users similar to this?
_will ink to an internal recording of this in a bit_ | True | not able to do the guide because of cloud account requirements - in a recent usability study, a person was not able to even try out a guide because he was not prepared to use his work aws account, and had no plan for where to put the resources.
can we direct folks to the docker learn guide in this situation?
can we have a guide in the docs specifically for folks similar to this person?
or are there other things we can do to provide an experience for users similar to this?
_will ink to an internal recording of this in a bit_ | usab | not able to do the guide because of cloud account requirements in a recent usability study a person was not able to even try out a guide because he was not prepared to use his work aws account and had no plan for where to put the resources can we direct folks to the docker learn guide in this situation can we have a guide in the docs specifically for folks similar to this person or are there other things we can do to provide an experience for users similar to this will ink to an internal recording of this in a bit | 1 |
729,014 | 25,104,984,156 | IssuesEvent | 2022-11-08 16:01:39 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Fix cppCheck | High Priority Bug ISIS Team: Core | **Describe the bug**
Following some recent updates cppCheck CI tests are no longer running. It appears that the global hooks path may be incorrectly set. There also may be other as yet undiagnosed issues.
**To Reproduce**
See failed tests on `pull_requests_cppcheck` pipeline - https://builds.mantidproject.org/job/pull_requests-cppcheck/
**Expected behavior**
cppCheck should run and any failures should be because the PR has cpp errors rather than a configuration issue with the nodes.
**Platform/Version (please complete the following information):**
- all nodes labelled `cppcheck` in Jenkins
**Additional context**
Information about setting global hooks path correctly - https://github.com/pre-commit/pre-commit/issues/1198
| 1.0 | Fix cppCheck - **Describe the bug**
Following some recent updates cppCheck CI tests are no longer running. It appears that the global hooks path may be incorrectly set. There also may be other as yet undiagnosed issues.
**To Reproduce**
See failed tests on `pull_requests_cppcheck` pipeline - https://builds.mantidproject.org/job/pull_requests-cppcheck/
**Expected behavior**
cppCheck should run and any failures should be because the PR has cpp errors rather than a configuration issue with the nodes.
**Platform/Version (please complete the following information):**
- all nodes labelled `cppcheck` in Jenkins
**Additional context**
Information about setting global hooks path correctly - https://github.com/pre-commit/pre-commit/issues/1198
| non_usab | fix cppcheck describe the bug following some recent updates cppcheck ci tests are no longer running it appears that the global hooks path may be incorrectly set there also may be other as yet undiagnosed issues to reproduce see failed tests on pull requests cppcheck pipeline expected behavior cppcheck should run and any failures should be because the pr has cpp errors rather than a configuration issue with the nodes platform version please complete the following information all nodes labelled cppcheck in jenkins additional context information about setting global hooks path correctly | 0 |
6,702 | 6,598,710,068 | IssuesEvent | 2017-09-16 09:39:36 | OysteinAmundsen/gymsystems | https://api.github.com/repos/OysteinAmundsen/gymsystems | closed | Should prevent clubs from modifying teams after tournament has started | security | Only the organizer should have the privileges to manage the tournament, and even then they should get a warning stating that every modification affects the currently running tournament.
Only the schedule might be interesting to change during the tournament. Teams and such should perhaps be disabled for editing for all (with the exception of Admin). | True | Should prevent clubs from modifying teams after tournament has started - Only the organizer should have the privileges to manage the tournament, and even then they should get a warning stating that every modification affects the currently running tournament.
Only the schedule might be interesting to change during the tournament. Teams and such should perhaps be disabled for editing for all (with the exception of Admin). | non_usab | should prevent clubs from modifying teams after tournament has started only the organizer should have the privileges to manage the tournament and even then they should get a warning stating that every modification affects the currently running tournament only the schedule might be interesting to change during the tournament teams and such should perhaps be disabled for editing for all with the exception of admin | 0 |
24,718 | 5,098,466,796 | IssuesEvent | 2017-01-04 01:50:27 | OraOpenSource/orawrap | https://api.github.com/repos/OraOpenSource/orawrap | closed | Documentation on createPool | documentation | Need to show how to create a pool and pass in the login information,
| 1.0 | Documentation on createPool - Need to show how to create a pool and pass in the login information,
| non_usab | documentation on createpool need to show how to create a pool and pass in the login information | 0 |
26,740 | 27,146,532,841 | IssuesEvent | 2023-02-16 20:25:19 | internetarchive/wcdimportbot | https://api.github.com/repos/internetarchive/wcdimportbot | closed | as a devop I want to see the status in the UI when the backend is parsing the wikitext | backend nice to have usability | 
| True | as a devop I want to see the status in the UI when the backend is parsing the wikitext - 
| usab | as a devop i want to see the status in the ui when the backend is parsing the wikitext | 1 |
164,257 | 13,938,962,517 | IssuesEvent | 2020-10-22 15:53:03 | intelligent-environments-lab/utx000 | https://api.github.com/repos/intelligent-environments-lab/utx000 | closed | Issues Table | documentation | # Table summarizing all issues with data collection
Various components of the study are missing for some participants or lacking. This table will help to summarize these points and serve as a quick reference | 1.0 | Issues Table - # Table summarizing all issues with data collection
Various components of the study are missing for some participants or lacking. This table will help to summarize these points and serve as a quick reference | non_usab | issues table table summarizing all issues with data collection various components of the study are missing for some participants or lacking this table will help to summarize these points and serve as a quick reference | 0 |
119,160 | 25,479,986,374 | IssuesEvent | 2022-11-25 19:09:10 | astro-informatics/s2wav | https://api.github.com/repos/astro-informatics/s2wav | closed | Basic Math Functions | enhancement good first issue Core Code | **Translate basic math tiling functions from s2let (c) to s2wav (base python).**
Difficulty:
- Low (This is a nice first issue to pick up).
Background:
- The wavelet transform works by "tiling" the harmonic domain with sets of filters, which can be designed for particular applications. To compute the tiling filters we need to evaluate the functions which generate those filters, and so it is useful to have those functions stored and easily accessed by more complicated parts of the package.
S2let File to translate:
- can be found [here](https://github.com/astro-informatics/s2let/blob/main/src/main/c/s2let_math.c)
S2Wav File location:
- should be put in the main package directory [here](https://github.com/astro-informatics/s2wav/tree/main/s2wav)
Notes:
- There are also a few extra functions in the s2let file that are useful, but everything below [this line](https://github.com/astro-informatics/s2let/blob/292d268b60dbbe041ee8bbba638465f62be71413/src/main/c/s2let_math.c#L92) can be ignored for the time being!
| 1.0 | Basic Math Functions - **Translate basic math tiling functions from s2let (c) to s2wav (base python).**
Difficulty:
- Low (This is a nice first issue to pick up).
Background:
- The wavelet transform works by "tiling" the harmonic domain with sets of filters, which can be designed for particular applications. To compute the tiling filters we need to evaluate the functions which generate those filters, and so it is useful to have those functions stored and easily accessed by more complicated parts of the package.
S2let File to translate:
- can be found [here](https://github.com/astro-informatics/s2let/blob/main/src/main/c/s2let_math.c)
S2Wav File location:
- should be put in the main package directory [here](https://github.com/astro-informatics/s2wav/tree/main/s2wav)
Notes:
- There are also a few extra functions in the s2let file that are useful, but everything below [this line](https://github.com/astro-informatics/s2let/blob/292d268b60dbbe041ee8bbba638465f62be71413/src/main/c/s2let_math.c#L92) can be ignored for the time being!
| non_usab | basic math functions translate basic math tiling functions from c to base python difficulty low this is a nice first issue to pick up background the wavelet transform works by tiling the harmonic domain with sets of filters which can be designed for particular applications to compute the tiling filters we need to evaluate the functions which generate those filters and so it is useful to have those functions stored and easily accessed by more complicated parts of the package file to translate can be found file location should be put in the main package directory notes there are also a few extra functions in the file that are useful but everything below can be ignored for the time being | 0 |
71,725 | 3,367,617,929 | IssuesEvent | 2015-11-22 10:19:05 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | Move <head> to MEI.shared | Component: Core Schema Priority: Medium Status: Needs Patch Type: Bug | _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on January 28, 2015 11:26:12_
Since \<head> is used by members of more than one other module, it should be moved to MEI.shared.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=220_ | 1.0 | Move <head> to MEI.shared - _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on January 28, 2015 11:26:12_
Since \<head> is used by members of more than one other module, it should be moved to MEI.shared.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=220_ | non_usab | move to mei shared from on january since is used by members of more than one other module it should be moved to mei shared original issue | 0 |
46,209 | 13,152,203,626 | IssuesEvent | 2020-08-09 20:53:18 | Jacksole/Learning-JavaScript | https://api.github.com/repos/Jacksole/Learning-JavaScript | closed | CVE-2018-19839 (Medium) detected in node-sass-4.9.3.tgz, CSS::Sass-v3.4.11 | security vulnerability | ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.9.3.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.9.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Learning-JavaScript/AngularJS/storfront/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Learning-JavaScript/AngularJS/storfront/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- :x: **node-sass-4.9.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Jacksole/Learning-JavaScript/commit/c9ca295725f33eb0d8e03f930a5da88ebb01cedf">c9ca295725f33eb0d8e03f930a5da88ebb01cedf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: Libsass:3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-19839 (Medium) detected in node-sass-4.9.3.tgz, CSS::Sass-v3.4.11 - ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.9.3.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.9.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Learning-JavaScript/AngularJS/storfront/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Learning-JavaScript/AngularJS/storfront/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- :x: **node-sass-4.9.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Jacksole/Learning-JavaScript/commit/c9ca295725f33eb0d8e03f930a5da88ebb01cedf">c9ca295725f33eb0d8e03f930a5da88ebb01cedf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: Libsass:3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium detected in node sass tgz css sass cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm learning javascript angularjs storfront package json path to vulnerable library tmp ws scm learning javascript angularjs storfront node modules node sass package json dependency hierarchy build angular tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass prior to the function handle error in sass context cpp allows attackers to cause a denial of service resulting from a heap based buffer over read via a crafted sass file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource | 0 |
17,356 | 11,948,815,793 | IssuesEvent | 2020-04-03 12:35:40 | TravelMapping/DataProcessing | https://api.github.com/repos/TravelMapping/DataProcessing | closed | NMP "looks intentional" error (marking FP) | nmps pending robustness usability | Allow nmpfps.log lines ending in [LOOKS INTENTIONAL] to be processed. | True | NMP "looks intentional" error (marking FP) - Allow nmpfps.log lines ending in [LOOKS INTENTIONAL] to be processed. | usab | nmp looks intentional error marking fp allow nmpfps log lines ending in to be processed | 1 |
2,806 | 3,197,037,462 | IssuesEvent | 2015-10-01 00:39:20 | cortoproject/corto | https://api.github.com/repos/cortoproject/corto | opened | Support for requested vs. actual parameters | Corto:DataReplication Corto:Tools Corto:TypeSystem Corto:Usability enhancement | A common idiom in distributed data centric systems is using target and actual values. For example, a thermostat might have a target temperature of 72, and an actual temperature of 68. This could be accomplished in corto today, with a definition like this:
```c++
struct Thermostat::
targetTemperature: int32
actualTemperature: int32, readonly
on update this: // do something
```
The behavior of the above two variables is completely up to the user. The downside of the above is that semantics of this pattern might vary from package to package (or even class to class). Additionally, the framework does not know about the semantic relation between the members, which would rule out the possibility to take advantage of this knowledge in dashboards or language bindings.
Proposed to introduce a new feature that enables this behavior on a member:
```c++
struct Thermostat::
temperature: int32, request
on update this: // do something
```
Semantics of this "request" field should be:
```
Thermostat t: 70 // Set target temperature to 70
"The current temperature is ${t.temperature}" // By default, read the actual temperature
t.temperature = 80 // Set target temperature to 80
"The current target is ${t.temperature.target}" // Read out target temperature
```
The actual temperature should not be writable from outside the object's implementation, and should behave exactly like a readonly member. | True | Support for requested vs. actual parameters - A common idiom in distributed data centric systems is using target and actual values. For example, a thermostat might have a target temperature of 72, and an actual temperature of 68. This could be accomplished in corto today, with a definition like this:
```c++
struct Thermostat::
targetTemperature: int32
actualTemperature: int32, readonly
on update this: // do something
```
The behavior of the above two variables is completely up to the user. The downside of the above is that semantics of this pattern might vary from package to package (or even class to class). Additionally, the framework does not know about the semantic relation between the members, which would rule out the possibility to take advantage of this knowledge in dashboards or language bindings.
Proposed to introduce a new feature that enables this behavior on a member:
```c++
struct Thermostat::
temperature: int32, request
on update this: // do something
```
Semantics of this "request" field should be:
```
Thermostat t: 70 // Set target temperature to 70
"The current temperature is ${t.temperature}" // By default, read the actual temperature
t.temperature = 80 // Set target temperature to 80
"The current target is ${t.temperature.target}" // Read out target temperature
```
The actual temperature should not be writable from outside the object's implementation, and should behave exactly like a readonly member. | usab | support for requested vs actual parameters a common idiom in distributed data centric systems is using target and actual values for example a thermostat might have a target temperature of and an actual temperature of this could be accomplished in corto today with a definition like this c struct thermostat targettemperature actualtemperature readonly on update this do something the behavior of the above two variables is completely up to the user the downside of the above is that semantics of this pattern might vary from package to package or even class to class additionally the framework does not know about the semantic relation between the members which would rule out the possibility to take advantage of this knowledge in dashboards or language bindings proposed to introduce a new feature that enables this behavior on a member c struct thermostat temperature request on update this do something semantics of this request field should be thermostat t set target temperature to the current temperature is t temperature by default read the actual temperature t temperature set target temperature to the current target is t temperature target read out target temperature the actual temperature should not be writable from outside the object s implementation and should behave exactly like a readonly member | 1 |
6,357 | 4,237,212,370 | IssuesEvent | 2016-07-05 21:01:17 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 27179267: Auto Unlock does not function with an iPhone | classification:ui/usability reproducible:always status:open | #### Description
Summary:
See title. For people who do not own an Apple Watch or choose not to wear one all of the time, this reduces usability for no apparent reason.
Steps to Reproduce:
1. Walk up to your locked computer with your iPhone.
Expected Results:
The computer unlocks automatically.
Actual Results:
Nothing happens.
Version:
macOS 10.11.5 (15F34)
Configuration:
iMac (Retina 5K, 27-inch, Late 2014)
4 GHz Intel Core i7
16 GB 1600 MHz DDR3
AMD Radeon R9 M295X 4096 MB
Attachments:
-
Product Version: 10.12
Created: 2016-07-05T20:04:18.982950
Originated: 2016-07-05T15:02:00
Open Radar Link: http://www.openradar.me/27179267 | True | 27179267: Auto Unlock does not function with an iPhone - #### Description
Summary:
See title. For people who do not own an Apple Watch or choose not to wear one all of the time, this reduces usability for no apparent reason.
Steps to Reproduce:
1. Walk up to your locked computer with your iPhone.
Expected Results:
The computer unlocks automatically.
Actual Results:
Nothing happens.
Version:
macOS 10.11.5 (15F34)
Configuration:
iMac (Retina 5K, 27-inch, Late 2014)
4 GHz Intel Core i7
16 GB 1600 MHz DDR3
AMD Radeon R9 M295X 4096 MB
Attachments:
-
Product Version: 10.12
Created: 2016-07-05T20:04:18.982950
Originated: 2016-07-05T15:02:00
Open Radar Link: http://www.openradar.me/27179267 | usab | auto unlock does not function with an iphone description summary see title for people who do not own an apple watch or choose not to wear one all of the time this reduces usability for no apparent reason steps to reproduce walk up to your locked computer with your iphone expected results the computer unlocks automatically actual results nothing happens version macos configuration imac retina inch late ghz intel core gb mhz amd radeon mb attachments product version created originated open radar link | 1 |
54,169 | 13,448,858,530 | IssuesEvent | 2020-09-08 16:01:56 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Health services have the wrong URL setting | Defect VAMC system | **Describe the defect**
Facility health services have the following path: `facility-health-services/[node:title]`
System health services have `[node:field_administration:entity:name]/system-health-services/[node:title]`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to /admin/content
2. Filter by Facility or VAMC system health services content type
3. Review URLs
**Expected behavior**
Facility health service URLs should be `[VAMC-system-path]/[health-services]/[node-title]`
VAMC system health service URLs should be `[VAMC-system-path]/[health-services]/[taxonomy-term]`
eg pittsburgh-health-care/health-services/system/radiology
AC
- [ ] New systems
- [ ] Existing nodes should be updated
| 1.0 | Health services have the wrong URL setting - **Describe the defect**
Facility health services have the following path: `facility-health-services/[node:title]`
System health services have `[node:field_administration:entity:name]/system-health-services/[node:title]`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to /admin/content
2. Filter by Facility or VAMC system health services content type
3. Review URLs
**Expected behavior**
Facility health service URLs should be `[VAMC-system-path]/[health-services]/[node-title]`
VAMC system health service URLs should be `[VAMC-system-path]/[health-services]/[taxonomy-term]`
eg pittsburgh-health-care/health-services/system/radiology
AC
- [ ] New systems
- [ ] Existing nodes should be updated
| non_usab | health services have the wrong url setting describe the defect facility health services have the following path facility health services system health services have system health services to reproduce steps to reproduce the behavior go to admin content filter by facility or vamc system health services content type review urls expected behavior facility health service urls should be vamc system health service urls should be eg pittsburgh health care health services system radiology ac new systems existing nodes should be updated | 0 |
27,631 | 29,952,197,240 | IssuesEvent | 2023-06-23 02:51:54 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | After upgrading to 23.1.3.5, I got a error named UNKNOWN_IDENTIFIER | st-wontfix usability unexpected behaviour | ClickHouse server version: 23.1.3.5
SQL:
```sql
SELECT
a.v
FROM
(SELECT arrayJoin(['v1', 'v2', 'v3']) AS v) AS a
JOIN (
SELECT 'v1' AS v) AS b
ON
a.v = b.v
```
Then `SELECT * FROM system.errors`,
the `last_error_message` of `UNKNOWN_IDENTIFIER` is
`Missing columns: 'b.v' 'v' while processing query: 'v = b.v', required columns: 'v' 'b.v' 'v' 'b.v'`
| True | After upgrading to 23.1.3.5, I got a error named UNKNOWN_IDENTIFIER - ClickHouse server version: 23.1.3.5
SQL:
```sql
SELECT
a.v
FROM
(SELECT arrayJoin(['v1', 'v2', 'v3']) AS v) AS a
JOIN (
SELECT 'v1' AS v) AS b
ON
a.v = b.v
```
Then `SELECT * FROM system.errors`,
the `last_error_message` of `UNKNOWN_IDENTIFIER` is
`Missing columns: 'b.v' 'v' while processing query: 'v = b.v', required columns: 'v' 'b.v' 'v' 'b.v'`
| usab | after upgrading to i got a error named unknown identifier clickhouse server version sql sql select a v from select arrayjoin as v as a join select as v as b on a v b v then select from system errors the last error message of unknown identifier is missing columns b v v while processing query v b v required columns v b v v b v | 1 |
10,124 | 6,575,954,524 | IssuesEvent | 2017-09-11 17:58:15 | dssg/architect | https://api.github.com/repos/dssg/architect | closed | Object-creation-time validation | usability-enhancement | Each component should implement basic validation of the creation-time arguments sent to it. Examples might be:
1. the given events table doesn't exist, or is empty (in the label generator)
2. collate object creation fails
The point is, when common problems will occur based on incorrect configuration, we want to be able to tell the user as soon as possible instead of waiting halfway through the experiment when the code fails. | True | Object-creation-time validation - Each component should implement basic validation of the creation-time arguments sent to it. Examples might be:
1. the given events table doesn't exist, or is empty (in the label generator)
2. collate object creation fails
The point is, when common problems will occur based on incorrect configuration, we want to be able to tell the user as soon as possible instead of waiting halfway through the experiment when the code fails. | usab | object creation time validation each component should implement basic validation of the creation time arguments sent to it examples might be the given events table doesn t exist or is empty in the label generator collate object creation fails the point is when common problems will occur based on incorrect configuration we want to be able to tell the user as soon as possible instead of waiting halfway through the experiment when the code fails | 1 |
11,236 | 7,122,531,161 | IssuesEvent | 2018-01-19 12:12:20 | coblo/gui-demo | https://api.github.com/repos/coblo/gui-demo | opened | Add support for private/public transaction comments/references. | enhancement feature usability | Currently the "comment" field of a transaction is only stored in the users local wallet. These comments are lost if the user recovers his account on a fresh installation. I propose to support at least 2 more types of transaction comments:
Public: Publicly readable comments via transaction metadata or custom stream
Private: Comment only readable by sender(s)/receiver(s) via encryption. | True | Add support for private/public transaction comments/references. - Currently the "comment" field of a transaction is only stored in the users local wallet. These comments are lost if the user recovers his account on a fresh installation. I propose to support at least 2 more types of transaction comments:
Public: Publicly readable comments via transaction metadata or custom stream
Private: Comment only readable by sender(s)/receiver(s) via encryption. | usab | add support for private public transaction comments references currently the comment field of a transaction is only stored in the users local wallet these comments are lost if the user recovers his account on a fresh installation i propose to support at least more types of transaction comments public publicly readable comments via transaction metadata or custom stream private comment only readable by sender s receiver s via encryption | 1 |
15,962 | 10,450,520,490 | IssuesEvent | 2019-09-19 10:44:21 | DMPRoadmap/roadmap | https://api.github.com/repos/DMPRoadmap/roadmap | opened | Improving search function in admin area | admin usability user group | The admin > plans view only allows search by title. We should expand the search so it also allows the author name to be search.

In addition, the author names should be links so you can navigate through to see all the plans created by one user.
| True | Improving search function in admin area - The admin > plans view only allows search by title. We should expand the search so it also allows the author name to be search.

In addition, the author names should be links so you can navigate through to see all the plans created by one user.
| usab | improving search function in admin area the admin plans view only allows search by title we should expand the search so it also allows the author name to be search in addition the author names should be links so you can navigate through to see all the plans created by one user | 1 |
19,801 | 14,577,889,337 | IssuesEvent | 2020-12-18 03:16:22 | TravelMapping/DataProcessing | https://api.github.com/repos/TravelMapping/DataProcessing | reopened | C++ flavored datacheck.sh | C++ datacheck usability | If we're to [eventually switch to the C++ version of the site update program](https://github.com/TravelMapping/DataProcessing/issues/375), it makes sense to also implement a C++ flavored datacheck.sh.
The first few thoughts that come to my head:
* how many threads to run, once we have threads
* `make` included as part of `datacheck.sh`
* special build to limit number of threads allowable?
* include `git pull` as part of `datacheck.sh` itself, or continue to input that manually as specified in [RUNNING.md](https://github.com/TravelMapping/DataProcessing/blob/dc6313183845632b350d9155b665766b55ef2eb9/RUNNING.md)?
* [x] `chmod +x datacheck.sh`?
* `sh datacheck.sh` -> `./datacheck.sh`?
* [x] some more options here (existing file too):
https://github.com/TravelMapping/DataProcessing/blob/6e98c786abf729bf195473d9a9a4d0171f2e8c82/siteupdate/python-teresco/datacheck.sh#L10-L15 | True | C++ flavored datacheck.sh - If we're to [eventually switch to the C++ version of the site update program](https://github.com/TravelMapping/DataProcessing/issues/375), it makes sense to also implement a C++ flavored datacheck.sh.
The first few thoughts that come to my head:
* how many threads to run, once we have threads
* `make` included as part of `datacheck.sh`
* special build to limit number of threads allowable?
* include `git pull` as part of `datacheck.sh` itself, or continue to input that manually as specified in [RUNNING.md](https://github.com/TravelMapping/DataProcessing/blob/dc6313183845632b350d9155b665766b55ef2eb9/RUNNING.md)?
* [x] `chmod +x datacheck.sh`?
* `sh datacheck.sh` -> `./datacheck.sh`?
* [x] some more options here (existing file too):
https://github.com/TravelMapping/DataProcessing/blob/6e98c786abf729bf195473d9a9a4d0171f2e8c82/siteupdate/python-teresco/datacheck.sh#L10-L15 | usab | c flavored datacheck sh if we re to it makes sense to also implement a c flavored datacheck sh the first few thoughts that come to my head how many threads to run once we have threads make included as part of datacheck sh special build to limit number of threads allowable include git pull as part of datacheck sh itself or continue to input that manually as specified in chmod x datacheck sh sh datacheck sh datacheck sh some more options here existing file too | 1 |
14,062 | 8,799,158,231 | IssuesEvent | 2018-12-24 12:21:42 | MarkBind/markbind | https://api.github.com/repos/MarkBind/markbind | closed | Panels: allow more space for panel heading | a-ReaderUsability p.Low | v1.9.2
Current: headings overflow to next line although there seems to be enough space to keep them in one line

Suggested: increase space allocated for the heading if possible. | True | Panels: allow more space for panel heading - v1.9.2
Current: headings overflow to next line although there seems to be enough space to keep them in one line

Suggested: increase space allocated for the heading if possible. | usab | panels allow more space for panel heading current headings overflow to next line although there seems to be enough space to keep them in one line suggested increase space allocated for the heading if possible | 1 |
25,437 | 25,185,762,715 | IssuesEvent | 2022-11-11 17:47:33 | FreeTubeApp/FreeTube | https://api.github.com/repos/FreeTubeApp/FreeTube | closed | [Bug]: ytming.com Redirect - Unable to Visit YT Images | bug third-party B: usability | ### Guidelines
- [X] I have encountered this bug in the [latest release of FreeTube](https://github.com/FreeTubeApp/FreeTube/releases).
- [X] I have searched the issue tracker for [open](https://github.com/FreeTubeApp/FreeTube/issues?q=is%3Aopen+is%3Aissue) and [closed](https://github.com/FreeTubeApp/FreeTube/issues?q=is%3Aissue+is%3Aclosed) issues that are similar to the bug report I want to file, without success.
- [X] I have searched the [documentation](https://docs.freetubeapp.io/) for information that matches the description of the bug I want to file, without success.
- [X] This issue contains only one bug.
### Describe the bug
Accessing any "i.ytimg.com" link will force redirect you to FreeTube, and since FreeTube isn't an image viewer it gives back obvious errors, and because of the redirect it doesn't show anything on the image page.
If you say no to the redirect it to FreeTube, it will just close the tab.
...
Leaving you unable to access any thumbnails hosted on YouTube
### Expected Behavior
To visit i.ytimg.com and actually be able to see an image.
### Issue Labels
usability issue
### FreeTube Version
v0.18.0 Beta (and below)
### Operating System Version
Windows 10 Pro 21H2
### Installation Method
.exe
### Primary API used
Local API
### Last Known Working FreeTube Version (If Any)
_No response_
### Additional Information
_No response_
### Nightly Build
- [x] I have encountered this bug in the latest [nightly build](https://docs.freetubeapp.io/development/nightly-builds). | True | [Bug]: ytming.com Redirect - Unable to Visit YT Images - ### Guidelines
- [X] I have encountered this bug in the [latest release of FreeTube](https://github.com/FreeTubeApp/FreeTube/releases).
- [X] I have searched the issue tracker for [open](https://github.com/FreeTubeApp/FreeTube/issues?q=is%3Aopen+is%3Aissue) and [closed](https://github.com/FreeTubeApp/FreeTube/issues?q=is%3Aissue+is%3Aclosed) issues that are similar to the bug report I want to file, without success.
- [X] I have searched the [documentation](https://docs.freetubeapp.io/) for information that matches the description of the bug I want to file, without success.
- [X] This issue contains only one bug.
### Describe the bug
Accessing any "i.ytimg.com" link will force redirect you to FreeTube, and since FreeTube isn't an image viewer it gives back obvious errors, and because of the redirect it doesn't show anything on the image page.
If you say no to the redirect it to FreeTube, it will just close the tab.
...
Leaving you unable to access any thumbnails hosted on YouTube
### Expected Behavior
To visit i.ytimg.com and actually be able to see an image.
### Issue Labels
usability issue
### FreeTube Version
v0.18.0 Beta (and below)
### Operating System Version
Windows 10 Pro 21H2
### Installation Method
.exe
### Primary API used
Local API
### Last Known Working FreeTube Version (If Any)
_No response_
### Additional Information
_No response_
### Nightly Build
- [x] I have encountered this bug in the latest [nightly build](https://docs.freetubeapp.io/development/nightly-builds). | usab | ytming com redirect unable to visit yt images guidelines i have encountered this bug in the i have searched the issue tracker for and issues that are similar to the bug report i want to file without success i have searched the for information that matches the description of the bug i want to file without success this issue contains only one bug describe the bug accessing any i ytimg com link will force redirect you to freetube and since freetube isn t an image viewer it gives back obvious errors and because of the redirect it doesn t show anything on the image page if you say no to the redirect it to freetube it will just close the tab leaving you unable to access any thumbnails hosted on youtube expected behavior to visit i ytimg com and actually be able to see an image issue labels usability issue freetube version beta and below operating system version windows pro installation method exe primary api used local api last known working freetube version if any no response additional information no response nightly build i have encountered this bug in the latest | 1 |
16,886 | 11,455,983,439 | IssuesEvent | 2020-02-06 20:14:23 | mekanism/Mekanism | https://api.github.com/repos/mekanism/Mekanism | closed | Mekanism Generators/Tools not mentioned in main CurseForge page | Fixed in dev Mekanism Mekanism: Additions Mekanism: Generators Mekanism: Tools Usability enhancement | The fact that Mekanism Generators and Mekanism Tools are entirely separate mods is not made clear on the main Mekanism CurseForge page, which may be confusing to players who download the mod themselves and find that it is missing all the power generation blocks (it certainly was for me and my brother). I think it would be helpful to people downloading Mekanism if you provided links to the tools and generators mods on the main mod's CurseForge page. | True | Mekanism Generators/Tools not mentioned in main CurseForge page - The fact that Mekanism Generators and Mekanism Tools are entirely separate mods is not made clear on the main Mekanism CurseForge page, which may be confusing to players who download the mod themselves and find that it is missing all the power generation blocks (it certainly was for me and my brother). I think it would be helpful to people downloading Mekanism if you provided links to the tools and generators mods on the main mod's CurseForge page. | usab | mekanism generators tools not mentioned in main curseforge page the fact that mekanism generators and mekanism tools are entirely separate mods is not made clear on the main mekanism curseforge page which may be confusing to players who download the mod themselves and find that it is missing all the power generation blocks it certainly was for me and my brother i think it would be helpful to people downloading mekanism if you provided links to the tools and generators mods on the main mod s curseforge page | 1 |
174,783 | 27,725,407,777 | IssuesEvent | 2023-03-15 01:31:02 | SOM-st/SOM | https://api.github.com/repos/SOM-st/SOM | closed | Check semantics of boolean messages like #or: and || to ensure that the block is evaluated only when expected. | spec language design | Found by @OctaveLarose | 1.0 | Check semantics of boolean messages like #or: and || to ensure that the block is evaluated only when expected. - Found by @OctaveLarose | non_usab | check semantics of boolean messages like or and to ensure that the block is evaluated only when expected found by octavelarose | 0 |
19,303 | 13,796,260,325 | IssuesEvent | 2020-10-09 19:30:46 | pangeo-data/climpred | https://api.github.com/repos/pangeo-data/climpred | closed | Alignment of skill dimension naming | cleanup usability | Handled in three different ways that can easily be aligned, but how?
- `hindcast.verify(reference=['historical','persistence']).skill # initialized, historical, persistence`
- PM allows both: historical and uninitialized `pm.verify(reference=['historical','unintialized','persistence']).skill # ['initialized','historical','unintialized','persistence']`
- bootstrap returns skill dimension: `init`, `uninit`, `pers`
proposed solution:
- hindcast: rename historical to uninitialized matching the dataset name
- PM: disallow historical
- rename bootstrap accordingly to initialized, uninitialized, persistence
are you OK with this? @bradyrx | True | Alignment of skill dimension naming - Handled in three different ways that can easily be aligned, but how?
- `hindcast.verify(reference=['historical','persistence']).skill # initialized, historical, persistence`
- PM allows both: historical and uninitialized `pm.verify(reference=['historical','unintialized','persistence']).skill # ['initialized','historical','unintialized','persistence']`
- bootstrap returns skill dimension: `init`, `uninit`, `pers`
proposed solution:
- hindcast: rename historical to uninitialized matching the dataset name
- PM: disallow historical
- rename bootstrap accordingly to initialized, uninitialized, persistence
are you OK with this? @bradyrx | usab | alignment of skill dimension naming handled in three different ways that can easily be aligned but how hindcast verify reference skill initialized historical persistence pm allows both historical and uninitialized pm verify reference skill bootstrap returns skill dimension init uninit pers proposed solution hindcast rename historical to uninitialized matching the dataset name pm disallow historical rename bootstrap accordingly to initialized uninitialized persistence are you ok with this bradyrx | 1 |
441,738 | 30,797,326,785 | IssuesEvent | 2023-07-31 21:01:33 | cilium/cilium | https://api.github.com/repos/cilium/cilium | opened | Document cilium-operator role in Gateway API | help-wanted area/documentation area/servicemesh | Gateway API doesn't seem to be documented here, but the operator plays a role in implementing Gateway API:
https://docs.cilium.io/en/latest/internals/cilium_operator/#cilium-operator
```[tasklist]
### Tasks
- [ ] Document the operator's role in Gateway API
```
| 1.0 | Document cilium-operator role in Gateway API - Gateway API doesn't seem to be documented here, but the operator plays a role in implementing Gateway API:
https://docs.cilium.io/en/latest/internals/cilium_operator/#cilium-operator
```[tasklist]
### Tasks
- [ ] Document the operator's role in Gateway API
```
| non_usab | document cilium operator role in gateway api gateway api doesn t seem to be documented here but the operator plays a role in implementing gateway api tasks document the operator s role in gateway api | 0 |
25,567 | 25,394,999,292 | IssuesEvent | 2022-11-22 07:57:22 | code-kern-ai/refinery | https://api.github.com/repos/code-kern-ai/refinery | closed | Improve compilation time and performances on the front-end | enhancement usability | **Is your feature request related to a problem? Please describe.**
Currently the front-end requires longer time to load and reduces the productivity time of the development. Possible solutions for this would be improving the compilation time and improving the performances on the front-end. For this purpose the tool Lighthouse can be used and according to the results of this tool the solution can be split into different sub-sections.
**Describe the solution you'd like**
## Compilation time
Tasks:
- Since version Angular 9, the AOT compiler is used (in the angular.json for the build process we should set this to true)
- Additionally, in the tsconfig.json we should enable the Ivy compiler
- Additional configurations in angular.json (buildOptimization should be set to false and replaced with other options since it is taking too much time) (sourceMap should be also set to true since it is used only for debugging) (vendorChunk, extractLicenses, namedChunks should be set to false)
- Increase Node's memory limit
- browserTarget should be changed to :build:development (changes from Angular 13)
- defaultConfiguration should be set to development
## Results from Lighthouse
Performance:
- First Contentful Paint and Largest Contentful Paint take lot time
- Enable text compression (text-based resources should be served with compression)
- Remove unused Javascript code (takes quite lot of time for something that is not used)
- Split code into modules that are lazy loaded (with this we can make sure that only the necessary modules are loaded on certain pages)
- OnPush Change detection strategy
- Unsubscribe from observables
- Static assets should be used with with efficient cache policy
- Reduce unused CSS code
- Remove function calls in template
Accessibility:
- Buttons should have an accessible name
- Form elements should have associated labels
- Links should have href attribute (we use the anchor tag on places where we do not link to other pages)
- Toggle elements should have an accessible name
Best practices:
- Images resolution should be improved
SEO:
- Document should have a meta description
- All images should have the alt attribute
-----
This issue is solved in two parts due to limited sprint time:
1. Compilation time - https://github.com/code-kern-ai/refinery-ui/pull/80
2. Performance - based on the measurement with Lighthouse (started with code split into lazy loaded modules, not part of this sprint)
| True | Improve compilation time and performances on the front-end - **Is your feature request related to a problem? Please describe.**
Currently the front-end requires longer time to load and reduces the productivity time of the development. Possible solutions for this would be improving the compilation time and improving the performances on the front-end. For this purpose the tool Lighthouse can be used and according to the results of this tool the solution can be split into different sub-sections.
**Describe the solution you'd like**
## Compilation time
Tasks:
- Since version Angular 9, the AOT compiler is used (in the angular.json for the build process we should set this to true)
- Additionally, in the tsconfig.json we should enable the Ivy compiler
- Additional configurations in angular.json (buildOptimization should be set to false and replaced with other options since it is taking too much time) (sourceMap should be also set to true since it is used only for debugging) (vendorChunk, extractLicenses, namedChunks should be set to false)
- Increase Node's memory limit
- browserTarget should be changed to :build:development (changes from Angular 13)
- defaultConfiguration should be set to development
## Results from Lighthouse
Performance:
- First Contentful Paint and Largest Contentful Paint take lot time
- Enable text compression (text-based resources should be served with compression)
- Remove unused Javascript code (takes quite lot of time for something that is not used)
- Split code into modules that are lazy loaded (with this we can make sure that only the necessary modules are loaded on certain pages)
- OnPush Change detection strategy
- Unsubscribe from observables
- Static assets should be used with with efficient cache policy
- Reduce unused CSS code
- Remove function calls in template
Accessibility:
- Buttons should have an accessible name
- Form elements should have associated labels
- Links should have href attribute (we use the anchor tag on places where we do not link to other pages)
- Toggle elements should have an accessible name
Best practices:
- Images resolution should be improved
SEO:
- Document should have a meta description
- All images should have the alt attribute
-----
This issue is solved in two parts due to limited sprint time:
1. Compilation time - https://github.com/code-kern-ai/refinery-ui/pull/80
2. Performance - based on the measurement with Lighthouse (started with code split into lazy loaded modules, not part of this sprint)
| usab | improve compilation time and performances on the front end is your feature request related to a problem please describe currently the front end requires longer time to load and reduces the productivity time of the development possible solutions for this would be improving the compilation time and improving the performances on the front end for this purpose the tool lighthouse can be used and according to the results of this tool the solution can be split into different sub sections describe the solution you d like compilation time tasks since version angular the aot compiler is used in the angular json for the build process we should set this to true additionally in the tsconfig json we should enable the ivy compiler additional configurations in angular json buildoptimization should be set to false and replaced with other options since it is taking too much time sourcemap should be also set to true since it is used only for debugging vendorchunk extractlicenses namedchunks should be set to false increase node s memory limit browsertarget should be changed to build development changes from angular defaultconfiguration should be set to development results from lighthouse performance first contentful paint and largest contentful paint take lot time enable text compression text based resources should be served with compression remove unused javascript code takes quite lot of time for something that is not used split code into modules that are lazy loaded with this we can make sure that only the necessary modules are loaded on certain pages onpush change detection strategy unsubscribe from observables static assets should be used with with efficient cache policy reduce unused css code remove function calls in template accessibility buttons should have an accessible name form elements should have associated labels links should have href attribute we use the anchor tag on places where we do not link to other pages toggle elements should have an accessible name best practices images resolution should be improved seo document should have a meta description all images should have the alt attribute this issue is solved in two parts due to limited sprint time compilation time performance based on the measurement with lighthouse started with code split into lazy loaded modules not part of this sprint | 1 |
23,180 | 21,313,960,673 | IssuesEvent | 2022-04-16 01:34:09 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | Misleading error message in type inference when list of files is empty | feature usability | ```
ip-172-31-5-46.eu-central-1.compute.internal :) SELECT count() FROM file('**/*.json', JSONEachRow)
SELECT count()
FROM file('**/*.json', JSONEachRow)
Query id: 4501adb2-7556-47db-a296-3950db3aaeb3
0 rows in set. Elapsed: 0.014 sec.
Received exception:
Code: 636. DB::Exception: All attempts to extract table structure from files failed. Errors:
. (CANNOT_EXTRACT_TABLE_STRUCTURE)
```
It shows newline character instead list of errors. | True | Misleading error message in type inference when list of files is empty - ```
ip-172-31-5-46.eu-central-1.compute.internal :) SELECT count() FROM file('**/*.json', JSONEachRow)
SELECT count()
FROM file('**/*.json', JSONEachRow)
Query id: 4501adb2-7556-47db-a296-3950db3aaeb3
0 rows in set. Elapsed: 0.014 sec.
Received exception:
Code: 636. DB::Exception: All attempts to extract table structure from files failed. Errors:
. (CANNOT_EXTRACT_TABLE_STRUCTURE)
```
It shows newline character instead list of errors. | usab | misleading error message in type inference when list of files is empty ip eu central compute internal select count from file json jsoneachrow select count from file json jsoneachrow query id rows in set elapsed sec received exception code db exception all attempts to extract table structure from files failed errors cannot extract table structure it shows newline character instead list of errors | 1 |
22,050 | 18,408,640,793 | IssuesEvent | 2021-10-13 00:54:33 | matomo-org/matomo | https://api.github.com/repos/matomo-org/matomo | closed | More verbose error message when login nonce check fails | Help wanted c: Usability c: Onboarding | At the moment Matomo only shows the following error:
> Fehler : Sicherheitschecks fehlgeschlagen. Bitte laden Sie das Formular erneut und prüfen Sie, ob Ihr Browser Cookies zulässt. Wenn Sie einen Proxy Server verwenden, müssen Sie Matomo so einrichten, dass es Proxy Header akzeptiert.
https://github.com/matomo-org/matomo/blob/da456513866ea0e276c51b046af5139244968a23/plugins/Login/lang/en.json#L8
But when the user has Cookies enabled (which is pretty likely) and is sure that they don't use a reverse proxy (shouldn't it say reverse proxy instead of proxy in the message?), there is no way for them to troubleshoot this issue further, and they will most likely just give up on using Matomo.
Maybe all checks that could fail in `verifyNonce()` and `isLocalUrl()` should be logged or even help display a more helpful error message.
https://github.com/matomo-org/matomo/blob/679e73f1236969db0c2d767655cb84456a727d24/core/Nonce.php#L70
https://github.com/matomo-org/matomo/blob/06d43857c48ada2fa7f1ad18a8309e8826c0e413/core/Url.php#L547 | True | More verbose error message when login nonce check fails - At the moment Matomo only shows the following error:
> Fehler : Sicherheitschecks fehlgeschlagen. Bitte laden Sie das Formular erneut und prüfen Sie, ob Ihr Browser Cookies zulässt. Wenn Sie einen Proxy Server verwenden, müssen Sie Matomo so einrichten, dass es Proxy Header akzeptiert.
https://github.com/matomo-org/matomo/blob/da456513866ea0e276c51b046af5139244968a23/plugins/Login/lang/en.json#L8
But when the user has Cookies enabled (which is pretty likely) and is sure that they don't use a reverse proxy (shouldn't it say reverse proxy instead of proxy in the message?), there is no way for them to troubleshoot this issue further, and they will most likely just give up on using Matomo.
Maybe all checks that could fail in `verifyNonce()` and `isLocalUrl()` should be logged or even help display a more helpful error message.
https://github.com/matomo-org/matomo/blob/679e73f1236969db0c2d767655cb84456a727d24/core/Nonce.php#L70
https://github.com/matomo-org/matomo/blob/06d43857c48ada2fa7f1ad18a8309e8826c0e413/core/Url.php#L547 | usab | more verbose error message when login nonce check fails at the moment matomo only shows the following error fehler sicherheitschecks fehlgeschlagen bitte laden sie das formular erneut und prüfen sie ob ihr browser cookies zulässt wenn sie einen proxy server verwenden müssen sie matomo so einrichten dass es proxy header akzeptiert but when the user has cookies enabled which is pretty likely and is sure that they don t use a reverse proxy shouldn t it say reverse proxy instead of proxy in the message there is no way for them to troubleshoot this issue further and they will most likely just give up on using matomo maybe all checks that could fail in verifynonce and islocalurl should be logged or even help display a more helpful error message | 1 |
380,018 | 26,398,129,298 | IssuesEvent | 2023-01-12 21:32:24 | microsoft/onnxruntime | https://api.github.com/repos/microsoft/onnxruntime | closed | [Documentation] Convert torch model to onnx in half precision | documentation | ### Describe the documentation issue
Is there an example about how to convert half precision torch models to onnx?
### Page / URL
_No response_ | 1.0 | [Documentation] Convert torch model to onnx in half precision - ### Describe the documentation issue
Is there an example about how to convert half precision torch models to onnx?
### Page / URL
_No response_ | non_usab | convert torch model to onnx in half precision describe the documentation issue is there an example about how to convert half precision torch models to onnx page url no response | 0 |
17,416 | 11,995,620,952 | IssuesEvent | 2020-04-08 15:28:33 | solo-io/service-mesh-hub | https://api.github.com/repos/solo-io/service-mesh-hub | opened | Additional Printer Columns on CRD | Area: Usability | We should be adding additional printer columns to CRDs when we write them.
In order to accomplish this, autopilot will need to be changed slightly. Currently the code to add AdditionalPrinterColumns in autopilot is located in the `Versions` section of the CRD.
https://github.com/solo-io/autopilot/blob/697f7d42905400b4c53ceae1d7a51d84a19096b6/codegen/templates/deploy/crd.go#L52
However, this causes an error when there is only one version. If there is only one version, the Additional Columns should be at the top level of the CRD definition. | True | Additional Printer Columns on CRD - We should be adding additional printer columns to CRDs when we write them.
In order to accomplish this, autopilot will need to be changed slightly. Currently the code to add AdditionalPrinterColumns in autopilot is located in the `Versions` section of the CRD.
https://github.com/solo-io/autopilot/blob/697f7d42905400b4c53ceae1d7a51d84a19096b6/codegen/templates/deploy/crd.go#L52
However, this causes an error when there is only one version. If there is only one version, the Additional Columns should be at the top level of the CRD definition. | usab | additional printer columns on crd we should be adding additional printer columns to crds when we write them in order to accomplish this autopilot will need to be changed slightly currently the code to add additionalprintercolumns in autopilot is located in the versions section of the crd however this causes an error when there is only one version if there is only one version the additional columns should be at the top level of the crd definition | 1 |
148,864 | 19,552,579,463 | IssuesEvent | 2022-01-03 01:15:11 | madhans23/linux-4.15 | https://api.github.com/repos/madhans23/linux-4.15 | opened | WS-2021-0482 (Medium) detected in linux-stagingv5.15 | security vulnerability | ## WS-2021-0482 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stagingv5.15</b></p></summary>
<p>
<p>hwmon staging tree</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git>https://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/pm8001/pm8001_init.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Linux/Kernel is vulnerable to memory leak during rmmod in drivers/scsi/pm8001/pm8001_init.c
<p>Publish Date: 2021-11-29
<p>URL: <a href=https://github.com/gregkh/linux/commit/269a4311b15f68d24e816f43f123888f241ed13d>WS-2021-0482</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002349">https://osv.dev/vulnerability/UVI-2021-1002349</a></p>
<p>Release Date: 2021-11-29</p>
<p>Fix Resolution: Linux/Kernel - v5.15.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0482 (Medium) detected in linux-stagingv5.15 - ## WS-2021-0482 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stagingv5.15</b></p></summary>
<p>
<p>hwmon staging tree</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git>https://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/pm8001/pm8001_init.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Linux/Kernel is vulnerable to memory leak during rmmod in drivers/scsi/pm8001/pm8001_init.c
<p>Publish Date: 2021-11-29
<p>URL: <a href=https://github.com/gregkh/linux/commit/269a4311b15f68d24e816f43f123888f241ed13d>WS-2021-0482</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002349">https://osv.dev/vulnerability/UVI-2021-1002349</a></p>
<p>Release Date: 2021-11-29</p>
<p>Fix Resolution: Linux/Kernel - v5.15.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | ws medium detected in linux ws medium severity vulnerability vulnerable library linux hwmon staging tree library home page a href found in base branch master vulnerable source files drivers scsi init c vulnerability details in linux kernel is vulnerable to memory leak during rmmod in drivers scsi init c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with whitesource | 0 |
5,856 | 21,472,427,634 | IssuesEvent | 2022-04-26 10:43:32 | Budibase/budibase | https://api.github.com/repos/Budibase/budibase | closed | Automations does not save multiple relationship | bug automations | I have the following table structure
`One Contact has many Deals`
I then set up an automation to update a contact with multiple deals. For the `Deals` column, I put the following.
```javascript
return [
"ro_ta_20165c6196654925a9cc74dfec158c3f_2485b71e8e714764b35d890fc63e60bb",
"ro_ta_20165c6196654925a9cc74dfec158c3f_f3775cf97f774f9a9533c939f163a0e1"
];
```
When I do this, the the Deals column is completely wiped out of any previous data.
The `Input` to the `Update Row` step is below. As you can see, my array is nested inside an array, like: `[ [ "id1", "id2" ] ]`. I have tried a few different methods and can't find a way to assign multiple relationships at once.
```json
{
"row": {
"tableId": "ta_5047dfad5b24426ab3fd0b350fe1780c",
"Contact": [],
"Deals": [
[
"ro_ta_20165c6196654925a9cc74dfec158c3f_2485b71e8e714764b35d890fc63e60bb",
"ro_ta_20165c6196654925a9cc74dfec158c3f_f3775cf97f774f9a9533c939f163a0e1"
]
]
},
"rowId": "ro_ta_5047dfad5b24426ab3fd0b350fe1780c_dc9b54563ebe4921a68e066a9877868f"
}
``` | 1.0 | Automations does not save multiple relationship - I have the following table structure
`One Contact has many Deals`
I then set up an automation to update a contact with multiple deals. For the `Deals` column, I put the following.
```javascript
return [
"ro_ta_20165c6196654925a9cc74dfec158c3f_2485b71e8e714764b35d890fc63e60bb",
"ro_ta_20165c6196654925a9cc74dfec158c3f_f3775cf97f774f9a9533c939f163a0e1"
];
```
When I do this, the the Deals column is completely wiped out of any previous data.
The `Input` to the `Update Row` step is below. As you can see, my array is nested inside an array, like: `[ [ "id1", "id2" ] ]`. I have tried a few different methods and can't find a way to assign multiple relationships at once.
```json
{
"row": {
"tableId": "ta_5047dfad5b24426ab3fd0b350fe1780c",
"Contact": [],
"Deals": [
[
"ro_ta_20165c6196654925a9cc74dfec158c3f_2485b71e8e714764b35d890fc63e60bb",
"ro_ta_20165c6196654925a9cc74dfec158c3f_f3775cf97f774f9a9533c939f163a0e1"
]
]
},
"rowId": "ro_ta_5047dfad5b24426ab3fd0b350fe1780c_dc9b54563ebe4921a68e066a9877868f"
}
``` | non_usab | automations does not save multiple relationship i have the following table structure one contact has many deals i then set up an automation to update a contact with multiple deals for the deals column i put the following javascript return ro ta ro ta when i do this the the deals column is completely wiped out of any previous data the input to the update row step is below as you can see my array is nested inside an array like i have tried a few different methods and can t find a way to assign multiple relationships at once json row tableid ta contact deals ro ta ro ta rowid ro ta | 0 |
23,157 | 21,186,511,312 | IssuesEvent | 2022-04-08 13:16:36 | VirtusLab/git-machete | https://api.github.com/repos/VirtusLab/git-machete | closed | `github checkout-prs --by=...` switches to ALL checked out branches one by one | bug docs github usability | Installed version from PR #435 (commit `6ee6681e33cf037e8c86b1cf340a443b4d77fcee`).
Testing on this very repo:
```
➜ 12:29 ~/git-machete develop $ g m s
master v3.7.2
│
└─develop v3.8.0
➜ 12:29 ~/git-machete develop $ gb
* develop
master
➜ 12:30 ~/git-machete develop $ g m github checkout-prs --by=amalota
Checking for open GitHub PRs...
branch 'feature/command_clean' set up to track 'origin/feature/command_clean'.
Switched to a new branch 'feature/command_clean'
Pull request #435 checked out at local branch feature/command_clean
branch 'code_quality/github_token_msg' set up to track 'origin/code_quality/github_token_msg'.
Switched to a new branch 'code_quality/github_token_msg'
Pull request #469 checked out at local branch code_quality/github_token_msg
branch 'feature/advance_with_push_to_remote' set up to track 'origin/feature/advance_with_push_to_remote'.
Switched to a new branch 'feature/advance_with_push_to_remote'
Pull request #473 checked out at local branch feature/advance_with_push_to_remote
➜ 12:30 ~/git-machete feature/advance_with_push_to_remote PR #473 (amalota) $
```
So... first HEAD hsa been switched to `feature/command_clean`, then to `code_quality/github_token_msg`, finally to `feature/advance_with_push_to_remote`.
This is contrary to docs in github.rst:
```
If only one PR is given, then switch the local repository's HEAD to its head branch.
```
In fact, the doc should probably say:
```
If only one PR has been checked out, then switch the local repository's HEAD to its head branch.
```
and `github checkout-prs` should behave adequately (i.e. for the sample above, it should **never** switch HEAD).
Pls provide regression tests as well. | True | `github checkout-prs --by=...` switches to ALL checked out branches one by one - Installed version from PR #435 (commit `6ee6681e33cf037e8c86b1cf340a443b4d77fcee`).
Testing on this very repo:
```
➜ 12:29 ~/git-machete develop $ g m s
master v3.7.2
│
└─develop v3.8.0
➜ 12:29 ~/git-machete develop $ gb
* develop
master
➜ 12:30 ~/git-machete develop $ g m github checkout-prs --by=amalota
Checking for open GitHub PRs...
branch 'feature/command_clean' set up to track 'origin/feature/command_clean'.
Switched to a new branch 'feature/command_clean'
Pull request #435 checked out at local branch feature/command_clean
branch 'code_quality/github_token_msg' set up to track 'origin/code_quality/github_token_msg'.
Switched to a new branch 'code_quality/github_token_msg'
Pull request #469 checked out at local branch code_quality/github_token_msg
branch 'feature/advance_with_push_to_remote' set up to track 'origin/feature/advance_with_push_to_remote'.
Switched to a new branch 'feature/advance_with_push_to_remote'
Pull request #473 checked out at local branch feature/advance_with_push_to_remote
➜ 12:30 ~/git-machete feature/advance_with_push_to_remote PR #473 (amalota) $
```
So... first HEAD hsa been switched to `feature/command_clean`, then to `code_quality/github_token_msg`, finally to `feature/advance_with_push_to_remote`.
This is contrary to docs in github.rst:
```
If only one PR is given, then switch the local repository's HEAD to its head branch.
```
In fact, the doc should probably say:
```
If only one PR has been checked out, then switch the local repository's HEAD to its head branch.
```
and `github checkout-prs` should behave adequately (i.e. for the sample above, it should **never** switch HEAD).
Pls provide regression tests as well. | usab | github checkout prs by switches to all checked out branches one by one installed version from pr commit testing on this very repo ➜ git machete develop g m s master │ └─develop ➜ git machete develop gb develop master ➜ git machete develop g m github checkout prs by amalota checking for open github prs branch feature command clean set up to track origin feature command clean switched to a new branch feature command clean pull request checked out at local branch feature command clean branch code quality github token msg set up to track origin code quality github token msg switched to a new branch code quality github token msg pull request checked out at local branch code quality github token msg branch feature advance with push to remote set up to track origin feature advance with push to remote switched to a new branch feature advance with push to remote pull request checked out at local branch feature advance with push to remote ➜ git machete feature advance with push to remote pr amalota so first head hsa been switched to feature command clean then to code quality github token msg finally to feature advance with push to remote this is contrary to docs in github rst if only one pr is given then switch the local repository s head to its head branch in fact the doc should probably say if only one pr has been checked out then switch the local repository s head to its head branch and github checkout prs should behave adequately i e for the sample above it should never switch head pls provide regression tests as well | 1 |
370,261 | 25,896,012,691 | IssuesEvent | 2022-12-14 22:32:22 | USACE/cumulus | https://api.github.com/repos/USACE/cumulus | closed | Create Developer-Focused Documentation Starter | documentation Maintenance | A primary goal for the Cumulus project this year is to facilitate ways for other developer(s) to contribute technically to the project. To date, it has been a small core team developing. As the platform has grown, the amount of know-how one needs to contribute to the codebase, implement a feature end-to-end, etc. has likewise grown.
The aim of this task is to:
(1) Create a basic documentation website, similar to what was done here (https://github.com/USACE/water), which is a simple docs site built with Docsify. The documentation text will be stored in this repository (https://github.com/USACE/cumulus). This will start as a skeleton and be continually updated as part of ongoing maintenance and development of new features.
(2) Enable CI/CD so that docs are rebuilt and deployed anytime changes are made | 1.0 | Create Developer-Focused Documentation Starter - A primary goal for the Cumulus project this year is to facilitate ways for other developer(s) to contribute technically to the project. To date, it has been a small core team developing. As the platform has grown, the amount of know-how one needs to contribute to the codebase, implement a feature end-to-end, etc. has likewise grown.
The aim of this task is to:
(1) Create a basic documentation website, similar to what was done here (https://github.com/USACE/water), which is a simple docs site built with Docsify. The documentation text will be stored in this repository (https://github.com/USACE/cumulus). This will start as a skeleton and be continually updated as part of ongoing maintenance and development of new features.
(2) Enable CI/CD so that docs are rebuilt and deployed anytime changes are made | non_usab | create developer focused documentation starter a primary goal for the cumulus project this year is to facilitate ways for other developer s to contribute technically to the project to date it has been a small core team developing as the platform has grown the amount of know how one needs to contribute to the codebase implement a feature end to end etc has likewise grown the aim of this task is to create a basic documentation website similar to what was done here which is a simple docs site built with docsify the documentation text will be stored in this repository this will start as a skeleton and be continually updated as part of ongoing maintenance and development of new features enable ci cd so that docs are rebuilt and deployed anytime changes are made | 0 |
2,224 | 2,600,038,550 | IssuesEvent | 2015-02-23 13:57:30 | jasp-stats/jasp-desktop | https://api.github.com/repos/jasp-stats/jasp-desktop | closed | Changing "Crosstabs" to "Contingency Tables" | design decision enhancement | Hi @Tahiraj,
We're going to change the names of the "Crosstabs" to "Contingency Tables"
Sound good to you?
If you're not in today, I'm happy to make the changes. | 1.0 | Changing "Crosstabs" to "Contingency Tables" - Hi @Tahiraj,
We're going to change the names of the "Crosstabs" to "Contingency Tables"
Sound good to you?
If you're not in today, I'm happy to make the changes. | non_usab | changing crosstabs to contingency tables hi tahiraj we re going to change the names of the crosstabs to contingency tables sound good to you if you re not in today i m happy to make the changes | 0 |
61,167 | 6,726,880,221 | IssuesEvent | 2017-10-17 11:35:56 | QubesOS/updates-status | https://api.github.com/repos/QubesOS/updates-status | closed | gui-agent-linux v4.0.4 (r4.0) | r4.0-fc24-testing r4.0-fc25-testing r4.0-fc26-testing r4.0-jessie-testing r4.0-stretch-testing | Update of gui-agent-linux to v4.0.4 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-gui-agent-linux/commit/36154f40cf15a0579a8c2dc3a9403f43bec6c14d
[Changes since previous version](https://github.com/QubesOS/qubes-gui-agent-linux/compare/v4.0.3...v4.0.4):
QubesOS/qubes-gui-agent-linux@36154f4 version 4.0.4
QubesOS/qubes-gui-agent-linux@b6c620f Use X module name, not filename
QubesOS/qubes-gui-agent-linux@35903c7 Merge remote-tracking branch 'qubesos/pr/17'
QubesOS/qubes-gui-agent-linux@5337372 xorg: ensure libfb.so is loaded before dummyqbs_drv.so (Issue #3093)
Referenced issues:
QubesOS/qubes-issues#3093
If you're release manager, you can issue GPG-inline signed command:
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 current repo` (available 7 days from now)
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| 5.0 | gui-agent-linux v4.0.4 (r4.0) - Update of gui-agent-linux to v4.0.4 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-gui-agent-linux/commit/36154f40cf15a0579a8c2dc3a9403f43bec6c14d
[Changes since previous version](https://github.com/QubesOS/qubes-gui-agent-linux/compare/v4.0.3...v4.0.4):
QubesOS/qubes-gui-agent-linux@36154f4 version 4.0.4
QubesOS/qubes-gui-agent-linux@b6c620f Use X module name, not filename
QubesOS/qubes-gui-agent-linux@35903c7 Merge remote-tracking branch 'qubesos/pr/17'
QubesOS/qubes-gui-agent-linux@5337372 xorg: ensure libfb.so is loaded before dummyqbs_drv.so (Issue #3093)
Referenced issues:
QubesOS/qubes-issues#3093
If you're release manager, you can issue GPG-inline signed command:
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 current repo` (available 7 days from now)
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload gui-agent-linux 36154f40cf15a0579a8c2dc3a9403f43bec6c14d r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| non_usab | gui agent linux update of gui agent linux to for qubes see comments below for details built from qubesos qubes gui agent linux version qubesos qubes gui agent linux use x module name not filename qubesos qubes gui agent linux merge remote tracking branch qubesos pr qubesos qubes gui agent linux xorg ensure libfb so is loaded before dummyqbs drv so issue referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload gui agent linux current repo available days from now upload gui agent linux current dists repo you can choose subset of distributions like vm vm available days from now upload gui agent linux security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it | 0 |
11,090 | 7,055,089,365 | IssuesEvent | 2018-01-04 05:59:08 | cortoproject/corto | https://api.github.com/repos/cortoproject/corto | opened | Improve implementation of overridable functions | Corto:Performance Corto:Usability | Overridable functions are currently implemented with a generated wrapper function that does a lookup for method id and then uses the method id to do a lookup in the method table of the type of the object that the method is being invoked on. Example code:
```c
void someMethod(someType this) {
corto_type type = corto_typeof(this);
static int methodId = corto_interface_resolveMethodId(baseClass, "someMethod()");
corto_method m = corto_interface_resolveMethodById(type, methodId);
corto_invoke(m, NULL, this);
}
```
This mechanism introduces an extra frame on the stack, requires generated code even if the method isn't called and adds a little bit of performance overhead to every call. To improve this, the overridable method should be called with a macro.
This will improve performance and remove a stackframe for every overridable method, which makes it easier to debug corto applications. The methodId should be stored on the method object itself, so that it doesn't require a separate lookup. | True | Improve implementation of overridable functions - Overridable functions are currently implemented with a generated wrapper function that does a lookup for method id and then uses the method id to do a lookup in the method table of the type of the object that the method is being invoked on. Example code:
```c
void someMethod(someType this) {
corto_type type = corto_typeof(this);
static int methodId = corto_interface_resolveMethodId(baseClass, "someMethod()");
corto_method m = corto_interface_resolveMethodById(type, methodId);
corto_invoke(m, NULL, this);
}
```
This mechanism introduces an extra frame on the stack, requires generated code even if the method isn't called and adds a little bit of performance overhead to every call. To improve this, the overridable method should be called with a macro.
This will improve performance and remove a stackframe for every overridable method, which makes it easier to debug corto applications. The methodId should be stored on the method object itself, so that it doesn't require a separate lookup. | usab | improve implementation of overridable functions overridable functions are currently implemented with a generated wrapper function that does a lookup for method id and then uses the method id to do a lookup in the method table of the type of the object that the method is being invoked on example code c void somemethod sometype this corto type type corto typeof this static int methodid corto interface resolvemethodid baseclass somemethod corto method m corto interface resolvemethodbyid type methodid corto invoke m null this this mechanism introduces an extra frame on the stack requires generated code even if the method isn t called and adds a little bit of performance overhead to every call to improve this the overridable method should be called with a macro this will improve performance and remove a stackframe for every overridable method which makes it easier to debug corto applications the methodid should be stored on the method object itself so that it doesn t require a separate lookup | 1 |
67,523 | 7,049,797,820 | IssuesEvent | 2018-01-03 00:33:01 | mkenney/go-chrome | https://api.github.com/repos/mkenney/go-chrome | closed | Add unit tests for `socket/protocol.cache_storage.go` | good first issue help wanted ✓ tests | ### What version of `go-chrome` are you using (tagged commit, commit hash or `master`)?
`master`
### What behavior do you expect? What are you trying to accomplish?
Unit tests exist with complete code coverage for `protocol.cache_storage.go`
### What behavior are you experiencing instead?
Tests do not exist or do not pass
### How can this behavior be reproduced?
`go test github.com/mkenney/go-chrome/socket` | 1.0 | Add unit tests for `socket/protocol.cache_storage.go` - ### What version of `go-chrome` are you using (tagged commit, commit hash or `master`)?
`master`
### What behavior do you expect? What are you trying to accomplish?
Unit tests exist with complete code coverage for `protocol.cache_storage.go`
### What behavior are you experiencing instead?
Tests do not exist or do not pass
### How can this behavior be reproduced?
`go test github.com/mkenney/go-chrome/socket` | non_usab | add unit tests for socket protocol cache storage go what version of go chrome are you using tagged commit commit hash or master master what behavior do you expect what are you trying to accomplish unit tests exist with complete code coverage for protocol cache storage go what behavior are you experiencing instead tests do not exist or do not pass how can this behavior be reproduced go test github com mkenney go chrome socket | 0 |
17,611 | 12,196,295,985 | IssuesEvent | 2020-04-29 18:50:15 | coreos/ignition | https://api.github.com/repos/coreos/ignition | closed | Remote file verification hash support | area/usability kind/enhancement spec change | Currently, [SHA-512 is the only supported hash function](https://github.com/coreos/ignition/blob/master/internal/util/verification.go#L67-L74) for remote file verification.
Many resources (in my case, Docker Compose) are only released with SHA-256 sums, making verification currently impossible without generating my own compatible sums, which kind of defeats the purpose.
Is there a reason SHA-256 was excluded? If not, would it be possible to include it? | True | Remote file verification hash support - Currently, [SHA-512 is the only supported hash function](https://github.com/coreos/ignition/blob/master/internal/util/verification.go#L67-L74) for remote file verification.
Many resources (in my case, Docker Compose) are only released with SHA-256 sums, making verification currently impossible without generating my own compatible sums, which kind of defeats the purpose.
Is there a reason SHA-256 was excluded? If not, would it be possible to include it? | usab | remote file verification hash support currently for remote file verification many resources in my case docker compose are only released with sha sums making verification currently impossible without generating my own compatible sums which kind of defeats the purpose is there a reason sha was excluded if not would it be possible to include it | 1 |
14,324 | 9,022,900,411 | IssuesEvent | 2019-02-07 04:13:54 | MarkBind/markbind | https://api.github.com/repos/MarkBind/markbind | closed | include: cyclic references cause stack overflow | a-AuthorUsability a-FaultTolerance c.Bug p.Low | **Tell us about your environment**
* NodeJS v9.3.0
* **MarkBind Version:** v1.16.1
**What did you do? Please include the actual source code causing the issue.**
<!-- Paste the source code below: -->
**a.md**
```html
<include src="b.md" />
```
**b.md**
```html
<include src="a.md" />
```
**What did you expect to happen?**
The includes should be resolved automatically. At least, a user-friendly error should given, to allow the user to understand what happens.
**What actually happened? Please include the actual, raw output.**
MarkBind errored with a RangeError.
```
error: RangeError: Maximum call stack size exceeded
Error while preprocessing '/path/to/file/a.md'
```
However, the RangeError and "Maximum call stack size" error isn't helpful in determining what went wrong.
<!-- You are encouraged to submit a PR that reproduces this in `test/test_site/bugs/`. -->
| True | include: cyclic references cause stack overflow - **Tell us about your environment**
* NodeJS v9.3.0
* **MarkBind Version:** v1.16.1
**What did you do? Please include the actual source code causing the issue.**
<!-- Paste the source code below: -->
**a.md**
```html
<include src="b.md" />
```
**b.md**
```html
<include src="a.md" />
```
**What did you expect to happen?**
The includes should be resolved automatically. At least, a user-friendly error should given, to allow the user to understand what happens.
**What actually happened? Please include the actual, raw output.**
MarkBind errored with a RangeError.
```
error: RangeError: Maximum call stack size exceeded
Error while preprocessing '/path/to/file/a.md'
```
However, the RangeError and "Maximum call stack size" error isn't helpful in determining what went wrong.
<!-- You are encouraged to submit a PR that reproduces this in `test/test_site/bugs/`. -->
| usab | include cyclic references cause stack overflow tell us about your environment nodejs markbind version what did you do please include the actual source code causing the issue a md html b md html what did you expect to happen the includes should be resolved automatically at least a user friendly error should given to allow the user to understand what happens what actually happened please include the actual raw output markbind errored with a rangeerror error rangeerror maximum call stack size exceeded error while preprocessing path to file a md however the rangeerror and maximum call stack size error isn t helpful in determining what went wrong | 1 |
27,873 | 30,589,803,582 | IssuesEvent | 2023-07-21 16:02:22 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | 4.0.alpha5 Tilemap Editor: Possible visual Inconsistency between rectangle tool and select tool | discussion topic:editor usability topic:2d | ### Godot version
4.0.alpha5
### System information
Windows 10
### Issue description

_Perhaps this just an issue of taste, but I'm not sure where else to report this._
As seen the gif above, the rectangle erase tool shows a clear rectangle where it will erase tiles. But when making a selection with the select tool it only outlines tiles covered in the current pending selection, as opposed to just drawing a rectangle.
I would prefer it if it would just draw a rectangle (and possibly also highlight tiles covered in the select). I thought it was a bug at first when I saw that nothing was happening when I was clicking.
### Steps to reproduce
Use the selection tool
### Minimal reproduction project
_No response_ | True | 4.0.alpha5 Tilemap Editor: Possible visual Inconsistency between rectangle tool and select tool - ### Godot version
4.0.alpha5
### System information
Windows 10
### Issue description

_Perhaps this just an issue of taste, but I'm not sure where else to report this._
As seen the gif above, the rectangle erase tool shows a clear rectangle where it will erase tiles. But when making a selection with the select tool it only outlines tiles covered in the current pending selection, as opposed to just drawing a rectangle.
I would prefer it if it would just draw a rectangle (and possibly also highlight tiles covered in the select). I thought it was a bug at first when I saw that nothing was happening when I was clicking.
### Steps to reproduce
Use the selection tool
### Minimal reproduction project
_No response_ | usab | tilemap editor possible visual inconsistency between rectangle tool and select tool godot version system information windows issue description perhaps this just an issue of taste but i m not sure where else to report this as seen the gif above the rectangle erase tool shows a clear rectangle where it will erase tiles but when making a selection with the select tool it only outlines tiles covered in the current pending selection as opposed to just drawing a rectangle i would prefer it if it would just draw a rectangle and possibly also highlight tiles covered in the select i thought it was a bug at first when i saw that nothing was happening when i was clicking steps to reproduce use the selection tool minimal reproduction project no response | 1 |
19,762 | 14,531,641,631 | IssuesEvent | 2020-12-14 21:06:43 | godotengine/godot | https://api.github.com/repos/godotengine/godot | opened | Snapped node is not moved, but scene is modified after slight drag | bug topic:editor usability | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
3.2.4 beta4
**Issue description:**
<!-- What happened, and what was expected. -->

**Steps to reproduce:**
1. Create a 2D node (Sprite for best effect)
2. Enable grid snapping and set it to some relatively high value
3. Click on your node and drag it very gently few pixels
4. When you release button, the node will stay in place (because it's snapped), but Godot will still register it as movement and mark your scene as modified | True | Snapped node is not moved, but scene is modified after slight drag - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
3.2.4 beta4
**Issue description:**
<!-- What happened, and what was expected. -->

**Steps to reproduce:**
1. Create a 2D node (Sprite for best effect)
2. Enable grid snapping and set it to some relatively high value
3. Click on your node and drag it very gently few pixels
4. When you release button, the node will stay in place (because it's snapped), but Godot will still register it as movement and mark your scene as modified | usab | snapped node is not moved but scene is modified after slight drag please search existing issues for potential duplicates before filing yours godot version issue description steps to reproduce create a node sprite for best effect enable grid snapping and set it to some relatively high value click on your node and drag it very gently few pixels when you release button the node will stay in place because it s snapped but godot will still register it as movement and mark your scene as modified | 1 |
25,229 | 24,915,914,737 | IssuesEvent | 2022-10-30 11:54:59 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | closed | On Win7 Tailscale is creating new tunnel on each restart. | OS-windows L2 Few P3 Can't get started T5 Usability | Steps to reproduce:
1. Tailscale 1.9.185 install and connect on Win7 64bit
2.Exit it
3. Restart Windows
4. Tailscale will be creating wintun (i.e tunnel interface 213)
5. exit Tailscale
6. restart windows
7. Tailscale will be creating wintun (i.e tunnel interface 214)
Windows Version : Win 7 64-bit
Tailscale : 1.9.185 / 1.8.7
| True | On Win7 Tailscale is creating new tunnel on each restart. - Steps to reproduce:
1. Tailscale 1.9.185 install and connect on Win7 64bit
2.Exit it
3. Restart Windows
4. Tailscale will be creating wintun (i.e tunnel interface 213)
5. exit Tailscale
6. restart windows
7. Tailscale will be creating wintun (i.e tunnel interface 214)
Windows Version : Win 7 64-bit
Tailscale : 1.9.185 / 1.8.7
| usab | on tailscale is creating new tunnel on each restart steps to reproduce tailscale install and connect on exit it restart windows tailscale will be creating wintun i e tunnel interface exit tailscale restart windows tailscale will be creating wintun i e tunnel interface windows version win bit tailscale | 1 |
24,861 | 24,394,490,537 | IssuesEvent | 2022-10-04 17:58:45 | citusdata/citus | https://api.github.com/repos/citusdata/citus | closed | Rebalancer might try using logical replication when replicating a reference table that doesn't have a primary key | usability | ```sql
SET citus.replicate_reference_tables_on_activate TO off;
CREATE TABLE ref_table(a int);
SELECT create_reference_table('ref_table');
SELECT citus_add_node('localhost', 10701);
SELECT rebalance_table_shards();
NOTICE: replicating reference table 'ref_table' to localhost:10701 ...
ERROR: cannot use logical replication to transfer shards of the relation ref_table since it doesn't have a REPLICA IDENTITY or PRIMARY KEY
DETAIL: UPDATE and DELETE commands on the shard will error out during logical replication unless there is a REPLICA IDENTITY or PRIMARY KEY.
HINT: If you wish to continue without a replica identity set the shard_transfer_mode to 'force_logical' or 'block_writes'.
CONTEXT: while executing command on localhost:10700
```
I would expect this to automatically chose `block_writes` mode because reference table doesn't have a primary key; though this would cause rebalance operation to block writes for distributed tables etc., so maybe this is a design choice ?
| True | Rebalancer might try using logical replication when replicating a reference table that doesn't have a primary key - ```sql
SET citus.replicate_reference_tables_on_activate TO off;
CREATE TABLE ref_table(a int);
SELECT create_reference_table('ref_table');
SELECT citus_add_node('localhost', 10701);
SELECT rebalance_table_shards();
NOTICE: replicating reference table 'ref_table' to localhost:10701 ...
ERROR: cannot use logical replication to transfer shards of the relation ref_table since it doesn't have a REPLICA IDENTITY or PRIMARY KEY
DETAIL: UPDATE and DELETE commands on the shard will error out during logical replication unless there is a REPLICA IDENTITY or PRIMARY KEY.
HINT: If you wish to continue without a replica identity set the shard_transfer_mode to 'force_logical' or 'block_writes'.
CONTEXT: while executing command on localhost:10700
```
I would expect this to automatically chose `block_writes` mode because reference table doesn't have a primary key; though this would cause rebalance operation to block writes for distributed tables etc., so maybe this is a design choice ?
| usab | rebalancer might try using logical replication when replicating a reference table that doesn t have a primary key sql set citus replicate reference tables on activate to off create table ref table a int select create reference table ref table select citus add node localhost select rebalance table shards notice replicating reference table ref table to localhost error cannot use logical replication to transfer shards of the relation ref table since it doesn t have a replica identity or primary key detail update and delete commands on the shard will error out during logical replication unless there is a replica identity or primary key hint if you wish to continue without a replica identity set the shard transfer mode to force logical or block writes context while executing command on localhost i would expect this to automatically chose block writes mode because reference table doesn t have a primary key though this would cause rebalance operation to block writes for distributed tables etc so maybe this is a design choice | 1 |
10,207 | 6,626,052,069 | IssuesEvent | 2017-09-22 17:54:45 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | RichTextLabel: a method to get the number of visible lines | enhancement topic:core usability | This is a feature request for RichTextLabel. Currently, RichTextLabel only offers to set/get the number of visible characters, but it does not offer how many lines are visible. As far as I see, this is not possible to calculate in GDScript when the font is a not a fixed-width one.
### Context:
The way RichTextLabel handles long texts (scrollbars) is useful when building desktop-like GUIs, but many games (Visual Novels, JRPGs) don't work that way.
I'm trying to implement a text box for dialogue. Typically, a a text box has a fixed width and fixed number of lines visible, so a long text that doesn't fit into the text box is broken into parts (each part contains only enough characters to fill the textbox), and the next part is not shown until the user presses a button.
I essentially need to implement something along the lines of [this](https://github.com/henriquelalves/GodotTIE), but I need to use RichTextLabel for italics and bold, and the font isn't a fixed width one. A way to obtain the number of visible lines will enable implemention.
| True | RichTextLabel: a method to get the number of visible lines - This is a feature request for RichTextLabel. Currently, RichTextLabel only offers to set/get the number of visible characters, but it does not offer how many lines are visible. As far as I see, this is not possible to calculate in GDScript when the font is a not a fixed-width one.
### Context:
The way RichTextLabel handles long texts (scrollbars) is useful when building desktop-like GUIs, but many games (Visual Novels, JRPGs) don't work that way.
I'm trying to implement a text box for dialogue. Typically, a a text box has a fixed width and fixed number of lines visible, so a long text that doesn't fit into the text box is broken into parts (each part contains only enough characters to fill the textbox), and the next part is not shown until the user presses a button.
I essentially need to implement something along the lines of [this](https://github.com/henriquelalves/GodotTIE), but I need to use RichTextLabel for italics and bold, and the font isn't a fixed width one. A way to obtain the number of visible lines will enable implemention.
| usab | richtextlabel a method to get the number of visible lines this is a feature request for richtextlabel currently richtextlabel only offers to set get the number of visible characters but it does not offer how many lines are visible as far as i see this is not possible to calculate in gdscript when the font is a not a fixed width one context the way richtextlabel handles long texts scrollbars is useful when building desktop like guis but many games visual novels jrpgs don t work that way i m trying to implement a text box for dialogue typically a a text box has a fixed width and fixed number of lines visible so a long text that doesn t fit into the text box is broken into parts each part contains only enough characters to fill the textbox and the next part is not shown until the user presses a button i essentially need to implement something along the lines of but i need to use richtextlabel for italics and bold and the font isn t a fixed width one a way to obtain the number of visible lines will enable implemention | 1 |
20,526 | 4,565,429,464 | IssuesEvent | 2016-09-15 00:22:59 | networkupstools/nut | https://api.github.com/repos/networkupstools/nut | opened | Ensure that all drivers print version information before they go into the background | documentation enhancement | In the "Starting the driver(s)" section in the user manual, the example says `Detected EATON - Ellipse MAX 1100 [...]`.
Not all drivers do this. For instance, `usbhid-ups` uses upsdebugx(1, ...) which won't print to the console unless a user has debugging turned on.
After fixing the drivers, update the documentation in case the format has changed. | 1.0 | Ensure that all drivers print version information before they go into the background - In the "Starting the driver(s)" section in the user manual, the example says `Detected EATON - Ellipse MAX 1100 [...]`.
Not all drivers do this. For instance, `usbhid-ups` uses upsdebugx(1, ...) which won't print to the console unless a user has debugging turned on.
After fixing the drivers, update the documentation in case the format has changed. | non_usab | ensure that all drivers print version information before they go into the background in the starting the driver s section in the user manual the example says detected eaton ellipse max not all drivers do this for instance usbhid ups uses upsdebugx which won t print to the console unless a user has debugging turned on after fixing the drivers update the documentation in case the format has changed | 0 |
60,875 | 7,403,856,484 | IssuesEvent | 2018-03-20 01:11:18 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | closed | Consistent button styles across the app | Medium Impact community member design-help | *Not Ready to Be Worked On*
As a user I want to see consistent button styles across the app so that I know what to expect when I interact with this UI element | 1.0 | Consistent button styles across the app - *Not Ready to Be Worked On*
As a user I want to see consistent button styles across the app so that I know what to expect when I interact with this UI element | non_usab | consistent button styles across the app not ready to be worked on as a user i want to see consistent button styles across the app so that i know what to expect when i interact with this ui element | 0 |
10,008 | 6,543,085,945 | IssuesEvent | 2017-09-02 17:07:23 | OpenOrienteering/mapper | https://api.github.com/repos/OpenOrienteering/mapper | closed | OpenFileDialog's extension string is too long | usability | ### Steps to reproduce
1. Open map/template dialog
### Actual behaviour


### Expected behaviour
At least, no need to duplicate each extension in uppercase. Anyway all variants (e.g. *.foo, *.FOO, *.Foo, *foO, etc) are accepted.
### Configuration
Mapper Version: latest master
Operating System: Arch Linux
| True | OpenFileDialog's extension string is too long - ### Steps to reproduce
1. Open map/template dialog
### Actual behaviour


### Expected behaviour
At least, no need to duplicate each extension in uppercase. Anyway all variants (e.g. *.foo, *.FOO, *.Foo, *foO, etc) are accepted.
### Configuration
Mapper Version: latest master
Operating System: Arch Linux
| usab | openfiledialog s extension string is too long steps to reproduce open map template dialog actual behaviour expected behaviour at least no need to duplicate each extension in uppercase anyway all variants e g foo foo foo foo etc are accepted configuration mapper version latest master operating system arch linux | 1 |
24,968 | 24,532,311,746 | IssuesEvent | 2022-10-11 17:30:47 | LLNL/RAJA | https://api.github.com/repos/LLNL/RAJA | closed | Add ability to turn off vectorization. | API/usability compilation vectorization API | Something like `RAJA_ENABLE_VECTORIZATION` to CMake for turning this off for non-vector builds. | True | Add ability to turn off vectorization. - Something like `RAJA_ENABLE_VECTORIZATION` to CMake for turning this off for non-vector builds. | usab | add ability to turn off vectorization something like raja enable vectorization to cmake for turning this off for non vector builds | 1 |
56,412 | 14,078,429,567 | IssuesEvent | 2020-11-04 13:33:42 | themagicalmammal/android_kernel_samsung_j7xlte | https://api.github.com/repos/themagicalmammal/android_kernel_samsung_j7xlte | opened | CVE-2018-11508 (Medium) detected in linuxv3.10 | security vulnerability | ## CVE-2018-11508 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_j7xlte/commit/439d18b77a020411b95770ba08a9229eed466cde">439d18b77a020411b95770ba08a9229eed466cde</a></p>
<p>Found in base branch: <b>xsentinel-1.6-clean</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The compat_get_timex function in kernel/compat.c in the Linux kernel before 4.16.9 allows local users to obtain sensitive information from kernel memory via adjtimex.
<p>Publish Date: 2018-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11508>CVE-2018-11508</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11508">https://nvd.nist.gov/vuln/detail/CVE-2018-11508</a></p>
<p>Release Date: 2018-05-28</p>
<p>Fix Resolution: 4.16.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-11508 (Medium) detected in linuxv3.10 - ## CVE-2018-11508 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_j7xlte/commit/439d18b77a020411b95770ba08a9229eed466cde">439d18b77a020411b95770ba08a9229eed466cde</a></p>
<p>Found in base branch: <b>xsentinel-1.6-clean</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The compat_get_timex function in kernel/compat.c in the Linux kernel before 4.16.9 allows local users to obtain sensitive information from kernel memory via adjtimex.
<p>Publish Date: 2018-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11508>CVE-2018-11508</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11508">https://nvd.nist.gov/vuln/detail/CVE-2018-11508</a></p>
<p>Release Date: 2018-05-28</p>
<p>Fix Resolution: 4.16.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch xsentinel clean vulnerable source files vulnerability details the compat get timex function in kernel compat c in the linux kernel before allows local users to obtain sensitive information from kernel memory via adjtimex publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
8,918 | 6,033,521,752 | IssuesEvent | 2017-06-09 08:33:52 | Caleydo/taggle | https://api.github.com/repos/Caleydo/taggle | opened | Matrix de-aggregation button inconsistent behavior | matrix Usability | There are two toolbars for aggregated matrix - matrix one and vector one. Both contain the button for matrix de-aggregation.
After clicking on the button in matrix toolbar, the matrix is de-aggregated, which is correct behavior.
After clicking on the button in vector toolbar, de-aggregated matrix is added to the table, but the aggregated vector stays there as well.

Either remove one of the buttons, or at least make the behavior consistent. | True | Matrix de-aggregation button inconsistent behavior - There are two toolbars for aggregated matrix - matrix one and vector one. Both contain the button for matrix de-aggregation.
After clicking on the button in matrix toolbar, the matrix is de-aggregated, which is correct behavior.
After clicking on the button in vector toolbar, de-aggregated matrix is added to the table, but the aggregated vector stays there as well.

Either remove one of the buttons, or at least make the behavior consistent. | usab | matrix de aggregation button inconsistent behavior there are two toolbars for aggregated matrix matrix one and vector one both contain the button for matrix de aggregation after clicking on the button in matrix toolbar the matrix is de aggregated which is correct behavior after clicking on the button in vector toolbar de aggregated matrix is added to the table but the aggregated vector stays there as well either remove one of the buttons or at least make the behavior consistent | 1 |
12,694 | 4,513,677,157 | IssuesEvent | 2016-09-04 12:30:39 | owncloud/gallery | https://api.github.com/repos/owncloud/gallery | closed | Show useful information about the current album | 10 coder wanted design designer wanted feature:photowall | ## Feature request
**User type**: All
**User level**: intermediate
### Description
Sometimes you'd like to quickly have the answers to these questions
* Can I upload new images to the current album?
* Do I have free space in the current album?
* Am I inside a shared album?
Sharing information is available in the share dialogue, but not the rest.
Maybe we could add that at the top of the infoPanel, shown when clicking on the infoButton?
* Lock icon to let users know they don't have write permission
* Share icon to let them know that the current album is shared
* Album size
* Free space
* Modification time
### Benefit / value
<!--
Please explain how it could benefit users of the app, other apps or 3rd party services
-->
Avoids having to switch to the files side to have access to the information
### Risk / caveats
<!--
Please explain the risks and caveats associated with this request
-->
It would make the infoButton permanent
### Sponsorship
<!--
This greatly accelerates the delivery of a feature
-->
None
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/31789304-show-useful-information-about-the-current-album?utm_campaign=plugin&utm_content=tracker%2F9328526&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F9328526&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | Show useful information about the current album - ## Feature request
**User type**: All
**User level**: intermediate
### Description
Sometimes you'd like to quickly have the answers to these questions
* Can I upload new images to the current album?
* Do I have free space in the current album?
* Am I inside a shared album?
Sharing information is available in the share dialogue, but not the rest.
Maybe we could add that at the top of the infoPanel, shown when clicking on the infoButton?
* Lock icon to let users know they don't have write permission
* Share icon to let them know that the current album is shared
* Album size
* Free space
* Modification time
### Benefit / value
<!--
Please explain how it could benefit users of the app, other apps or 3rd party services
-->
Avoids having to switch to the files side to have access to the information
### Risk / caveats
<!--
Please explain the risks and caveats associated with this request
-->
It would make the infoButton permanent
### Sponsorship
<!--
This greatly accelerates the delivery of a feature
-->
None
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/31789304-show-useful-information-about-the-current-album?utm_campaign=plugin&utm_content=tracker%2F9328526&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F9328526&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_usab | show useful information about the current album feature request user type all user level intermediate description sometimes you d like to quickly have the answers to these questions can i upload new images to the current album do i have free space in the current album am i inside a shared album sharing information is available in the share dialogue but not the rest maybe we could add that at the top of the infopanel shown when clicking on the infobutton lock icon to let users know they don t have write permission share icon to let them know that the current album is shared album size free space modification time benefit value please explain how it could benefit users of the app other apps or party services avoids having to switch to the files side to have access to the information risk caveats please explain the risks and caveats associated with this request it would make the infobutton permanent sponsorship this greatly accelerates the delivery of a feature none want to back this issue we accept bounties via | 0 |
17,539 | 12,134,739,149 | IssuesEvent | 2020-04-23 11:16:59 | Homebrew/brew | https://api.github.com/repos/Homebrew/brew | closed | Migrate Homebrew/homebrew-core CI to GitHub Actions CI | discussion features in progress usability | # A detailed description of the proposed feature
We should migrate the CI for Homebrew/homebrew-core from our own hosted Linux Jenkins instance with our own ESXi clients to Azure Pipelines. This should be done in several stages:
1. We configure Azure Pipelines to test and verify our workflow (@ladislas has offered to do/start this)
1. We will need to write an `azure-pipelines.yml` for Homebrew/homebrew-core which triggers Microsoft's build agents and upload bottles from all PRs to Azure Pipelines.
2. We will need to make our `azure-pipelines.yml` upload bottles to Bintray through a release pipeline in a form that `brew pull --bottle` can access/publish for testing.
- This is currently blocked on https://developercommunity.visualstudio.com/content/problem/337049/unable-to-create-pull-request-release-trigger.html?childToView=593066#comment-593066 CC @ethomson for an ETA on a fix for that.
2. We migrate our Linux Jenkins server to use the Azure Pipelines hosted service. This will mean our workers remain on our MacStadium ESXi hosted nodes but builds are triggered by Azure Pipelines build agents.
1. We will adjust the `azure-pipelines.yml` to use our own build agents. We will test this with some PRs to ensure that things can be built, tested and bottles published with this new system.
2. We will switch over the default Homebrew/homebrew-core CI to use Azure Pipelines with our own build agents
3. When we are happy with this setup: we tear down the https://jenkins.brew.sh service and all related machines.
3. We migrate our own build agents to use Microsoft's build agents
1. We will use a single Microsoft build agent alongside one of our own build agents to verify that it can handle e.g. our timeouts and workflow
- This will require Microsoft permitting multi-day timeouts on their build agents for our usage
2. We will migrate all of our build agents to use Microsoft build agents
- This will require Microsoft supporting all the macOS versions we currently support (newest and the two prior) as well as figuring out a solution for future public beta versions
3. We will tear down our build agents
4. @mikemcquaid will cry with joy
# The motivation for the feature
The current CI for Homebrew/homebrew-core works good enough but has the following issues:
- Updating the base images is a manual process that’s required every time there is a macOS/Xcode/CLT/cask dependencies (e.g. Java, XQuartz, OSXFuse) update.
- There should not be shared state between builds but some edge cases can result in this. This is undesirable both for CI reliability and security.
- When any aspect of CI breaks or needs updated we are reliant on a small subset of maintainers to fix things. When it’s a complicated issue we are reliant on a single maintainer (me, @mikemcquaid) to fix things. If I get hit by a bus: this would be bad.
- No-one on the project particularly enjoys or specialises in managing macOS CI
- At this point the usability of Jenkins is painful for both administrators (e.g. you cannot auto-update plugins) and users (viewing logs for a build is not trivial)
- We have to maintain our own Linux server for Jenkins, ESXi VM servers and macOS VMs
- The environment used for Homebrew/brew is not the same as Homebrew/homebrew-core. This sometimes means PRs merged on Homebrew/brew will break when tested on Homebrew/homebrew-core.
It would be good to avoid all of the above work and outsource all (or: as much as possible) system administration work to third-parties who specialise at doing that at scale.
# How the feature would be relevant to at least 90% of Homebrew users
- Maintainers will have more time to do non-CI things
- Our CI no longer will have a [bus factor](https://en.wikipedia.org/wiki/Bus_factor) of one.
- CI will be more consistent and more reliable
# What alternatives to the feature have been considered
- Staying with Jenkins as-is
- Hopefully the above sufficiently explains why this status quo is unsustainable
- Moving Jenkins to build pipelines so we are on the newest Jenkins way of doing things
- This would solve some problems (us using essentially an unsupported/old Jenkins co. nfiguration) while not solving others (Jenkins UI not auto-updating plugins, maintaining our own infrastructure)
- Moving to something like BuildKite which would involve continuing to use our own servers
- Worst case we end up doing this with Azure Pipelines but at least this provides us with the future option to not have to do this
- Moving to another cloud provider e.g. Travis CI/Circle CI rather than Azure
- Azure currently has by far the best performance as well as having the most positive direct relationship with Homebrew (Mike has previously tried to convince both Travis CI and Circle CI to help with this and they've been extremely unhelpful whereas Microsoft has a direct product management relationship with us)
---
@Homebrew/maintainers any thoughts on any of the above? Anything I've missed? Any major problems I've not anticipated? | True | Migrate Homebrew/homebrew-core CI to GitHub Actions CI - # A detailed description of the proposed feature
We should migrate the CI for Homebrew/homebrew-core from our own hosted Linux Jenkins instance with our own ESXi clients to Azure Pipelines. This should be done in several stages:
1. We configure Azure Pipelines to test and verify our workflow (@ladislas has offered to do/start this)
1. We will need to write an `azure-pipelines.yml` for Homebrew/homebrew-core which triggers Microsoft's build agents and upload bottles from all PRs to Azure Pipelines.
2. We will need to make our `azure-pipelines.yml` upload bottles to Bintray through a release pipeline in a form that `brew pull --bottle` can access/publish for testing.
- This is currently blocked on https://developercommunity.visualstudio.com/content/problem/337049/unable-to-create-pull-request-release-trigger.html?childToView=593066#comment-593066 CC @ethomson for an ETA on a fix for that.
2. We migrate our Linux Jenkins server to use the Azure Pipelines hosted service. This will mean our workers remain on our MacStadium ESXi hosted nodes but builds are triggered by Azure Pipelines build agents.
1. We will adjust the `azure-pipelines.yml` to use our own build agents. We will test this with some PRs to ensure that things can be built, tested and bottles published with this new system.
2. We will switch over the default Homebrew/homebrew-core CI to use Azure Pipelines with our own build agents
3. When we are happy with this setup: we tear down the https://jenkins.brew.sh service and all related machines.
3. We migrate our own build agents to use Microsoft's build agents
1. We will use a single Microsoft build agent alongside one of our own build agents to verify that it can handle e.g. our timeouts and workflow
- This will require Microsoft permitting multi-day timeouts on their build agents for our usage
2. We will migrate all of our build agents to use Microsoft build agents
- This will require Microsoft supporting all the macOS versions we currently support (newest and the two prior) as well as figuring out a solution for future public beta versions
3. We will tear down our build agents
4. @mikemcquaid will cry with joy
# The motivation for the feature
The current CI for Homebrew/homebrew-core works good enough but has the following issues:
- Updating the base images is a manual process that’s required every time there is a macOS/Xcode/CLT/cask dependencies (e.g. Java, XQuartz, OSXFuse) update.
- There should not be shared state between builds but some edge cases can result in this. This is undesirable both for CI reliability and security.
- When any aspect of CI breaks or needs updated we are reliant on a small subset of maintainers to fix things. When it’s a complicated issue we are reliant on a single maintainer (me, @mikemcquaid) to fix things. If I get hit by a bus: this would be bad.
- No-one on the project particularly enjoys or specialises in managing macOS CI
- At this point the usability of Jenkins is painful for both administrators (e.g. you cannot auto-update plugins) and users (viewing logs for a build is not trivial)
- We have to maintain our own Linux server for Jenkins, ESXi VM servers and macOS VMs
- The environment used for Homebrew/brew is not the same as Homebrew/homebrew-core. This sometimes means PRs merged on Homebrew/brew will break when tested on Homebrew/homebrew-core.
It would be good to avoid all of the above work and outsource all (or: as much as possible) system administration work to third-parties who specialise at doing that at scale.
# How the feature would be relevant to at least 90% of Homebrew users
- Maintainers will have more time to do non-CI things
- Our CI no longer will have a [bus factor](https://en.wikipedia.org/wiki/Bus_factor) of one.
- CI will be more consistent and more reliable
# What alternatives to the feature have been considered
- Staying with Jenkins as-is
- Hopefully the above sufficiently explains why this status quo is unsustainable
- Moving Jenkins to build pipelines so we are on the newest Jenkins way of doing things
- This would solve some problems (us using essentially an unsupported/old Jenkins co. nfiguration) while not solving others (Jenkins UI not auto-updating plugins, maintaining our own infrastructure)
- Moving to something like BuildKite which would involve continuing to use our own servers
- Worst case we end up doing this with Azure Pipelines but at least this provides us with the future option to not have to do this
- Moving to another cloud provider e.g. Travis CI/Circle CI rather than Azure
- Azure currently has by far the best performance as well as having the most positive direct relationship with Homebrew (Mike has previously tried to convince both Travis CI and Circle CI to help with this and they've been extremely unhelpful whereas Microsoft has a direct product management relationship with us)
---
@Homebrew/maintainers any thoughts on any of the above? Anything I've missed? Any major problems I've not anticipated? | usab | migrate homebrew homebrew core ci to github actions ci a detailed description of the proposed feature we should migrate the ci for homebrew homebrew core from our own hosted linux jenkins instance with our own esxi clients to azure pipelines this should be done in several stages we configure azure pipelines to test and verify our workflow ladislas has offered to do start this we will need to write an azure pipelines yml for homebrew homebrew core which triggers microsoft s build agents and upload bottles from all prs to azure pipelines we will need to make our azure pipelines yml upload bottles to bintray through a release pipeline in a form that brew pull bottle can access publish for testing this is currently blocked on cc ethomson for an eta on a fix for that we migrate our linux jenkins server to use the azure pipelines hosted service this will mean our workers remain on our macstadium esxi hosted nodes but builds are triggered by azure pipelines build agents we will adjust the azure pipelines yml to use our own build agents we will test this with some prs to ensure that things can be built tested and bottles published with this new system we will switch over the default homebrew homebrew core ci to use azure pipelines with our own build agents when we are happy with this setup we tear down the service and all related machines we migrate our own build agents to use microsoft s build agents we will use a single microsoft build agent alongside one of our own build agents to verify that it can handle e g our timeouts and workflow this will require microsoft permitting multi day timeouts on their build agents for our usage we will migrate all of our build agents to use microsoft build agents this will require microsoft supporting all the macos versions we currently support newest and the two prior as well as figuring out a solution for future public beta versions we will tear down our build agents mikemcquaid will cry with joy the motivation for the feature the current ci for homebrew homebrew core works good enough but has the following issues updating the base images is a manual process that’s required every time there is a macos xcode clt cask dependencies e g java xquartz osxfuse update there should not be shared state between builds but some edge cases can result in this this is undesirable both for ci reliability and security when any aspect of ci breaks or needs updated we are reliant on a small subset of maintainers to fix things when it’s a complicated issue we are reliant on a single maintainer me mikemcquaid to fix things if i get hit by a bus this would be bad no one on the project particularly enjoys or specialises in managing macos ci at this point the usability of jenkins is painful for both administrators e g you cannot auto update plugins and users viewing logs for a build is not trivial we have to maintain our own linux server for jenkins esxi vm servers and macos vms the environment used for homebrew brew is not the same as homebrew homebrew core this sometimes means prs merged on homebrew brew will break when tested on homebrew homebrew core it would be good to avoid all of the above work and outsource all or as much as possible system administration work to third parties who specialise at doing that at scale how the feature would be relevant to at least of homebrew users maintainers will have more time to do non ci things our ci no longer will have a of one ci will be more consistent and more reliable what alternatives to the feature have been considered staying with jenkins as is hopefully the above sufficiently explains why this status quo is unsustainable moving jenkins to build pipelines so we are on the newest jenkins way of doing things this would solve some problems us using essentially an unsupported old jenkins co nfiguration while not solving others jenkins ui not auto updating plugins maintaining our own infrastructure moving to something like buildkite which would involve continuing to use our own servers worst case we end up doing this with azure pipelines but at least this provides us with the future option to not have to do this moving to another cloud provider e g travis ci circle ci rather than azure azure currently has by far the best performance as well as having the most positive direct relationship with homebrew mike has previously tried to convince both travis ci and circle ci to help with this and they ve been extremely unhelpful whereas microsoft has a direct product management relationship with us homebrew maintainers any thoughts on any of the above anything i ve missed any major problems i ve not anticipated | 1 |
235,557 | 18,051,950,368 | IssuesEvent | 2021-09-19 22:18:07 | lcpulzone/tea_time | https://api.github.com/repos/lcpulzone/tea_time | closed | Create Project Board | documentation | - [x] Add issues for minimum requirements
- [x] Create issues for step by step process
- [x] Add any foreseeable 'sticky points' | 1.0 | Create Project Board - - [x] Add issues for minimum requirements
- [x] Create issues for step by step process
- [x] Add any foreseeable 'sticky points' | non_usab | create project board add issues for minimum requirements create issues for step by step process add any foreseeable sticky points | 0 |
9,442 | 6,304,810,587 | IssuesEvent | 2017-07-21 16:47:56 | openstreetmap/iD | https://api.github.com/repos/openstreetmap/iD | closed | Distinguish Service Roads from Parking Aisle | preset usability | I don't know the best way of doing this (color, thickness) but have them distinguished would be very helpful. | True | Distinguish Service Roads from Parking Aisle - I don't know the best way of doing this (color, thickness) but have them distinguished would be very helpful. | usab | distinguish service roads from parking aisle i don t know the best way of doing this color thickness but have them distinguished would be very helpful | 1 |
55,052 | 13,507,515,196 | IssuesEvent | 2020-09-14 06:07:50 | rsx-labs/aide-frontend | https://api.github.com/repos/rsx-labs/aide-frontend | closed | [Daily Workplace Audit] Application error on first access | Bug Fixed - ready for build For Next Build High Priority | **Describe the bug**
Application error on first access
[4:54 PM] Trilles, Marvin
2020-09-03 16:46:04.7696::ERROR::UI_AIDE_CommCellServices.DailyAuditPage::System.NullReferenceException: Object reference not set to an instance of an object. at UI_AIDE_CommCellServices.DailyAuditPage.GenerateQuestions()
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version (please complete the following information):**
- Version 3.40
**Additional context**
Add any other context about the problem here.
| 2.0 | [Daily Workplace Audit] Application error on first access - **Describe the bug**
Application error on first access
[4:54 PM] Trilles, Marvin
2020-09-03 16:46:04.7696::ERROR::UI_AIDE_CommCellServices.DailyAuditPage::System.NullReferenceException: Object reference not set to an instance of an object. at UI_AIDE_CommCellServices.DailyAuditPage.GenerateQuestions()
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Version (please complete the following information):**
- Version 3.40
**Additional context**
Add any other context about the problem here.
| non_usab | application error on first access describe the bug application error on first access trilles marvin error ui aide commcellservices dailyauditpage system nullreferenceexception object reference not set to an instance of an object at ui aide commcellservices dailyauditpage generatequestions to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem version please complete the following information version additional context add any other context about the problem here | 0 |
115,695 | 4,680,089,590 | IssuesEvent | 2016-10-08 02:09:09 | CS2103AUG2016-T16-C3/main | https://api.github.com/repos/CS2103AUG2016-T16-C3/main | opened | New User - View how to create a task | priority.low type.enhancement type.story | Think about how we can implement this... Type `tutorial` and it shows how to create a task? But how would we show this? An animation? Actually add a task by typing something into the GUI? | 1.0 | New User - View how to create a task - Think about how we can implement this... Type `tutorial` and it shows how to create a task? But how would we show this? An animation? Actually add a task by typing something into the GUI? | non_usab | new user view how to create a task think about how we can implement this type tutorial and it shows how to create a task but how would we show this an animation actually add a task by typing something into the gui | 0 |
283,259 | 30,913,246,136 | IssuesEvent | 2023-08-05 01:26:33 | hshivhare67/kernel_v4.19.72_CVE-2022-42896_new | https://api.github.com/repos/hshivhare67/kernel_v4.19.72_CVE-2022-42896_new | reopened | CVE-2019-19768 (High) detected in linuxlinux-4.19.279 | Mend: dependency security vulnerability | ## CVE-2019-19768 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.4.0-rc2, there is a use-after-free (read) in the __blk_add_trace function in kernel/trace/blktrace.c (which is used to fill out a blk_io_trace structure and place it in a per-cpu sub-buffer).
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19768>CVE-2019-19768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-19768">https://nvd.nist.gov/vuln/detail/CVE-2019-19768</a></p>
<p>Release Date: 2020-06-10</p>
<p>Fix Resolution: kernel-doc - 3.10.0-514.76.1,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-327.88.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1;kernel-rt-core - 4.18.0-193.rt13.51;kernel-rt-debug-debuginfo - 4.18.0-193.rt13.51;kernel-abi-whitelists - 3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1;kernel-zfcpdump-modules - 4.18.0-193,4.18.0-147.13.2;kernel-rt-trace-devel - 3.10.0-1127.8.2.rt56.1103;kernel-debug-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;kernel-bootwrapper - 3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-193.rt13.51;kernel-rt-debug-modules - 4.18.0-193.rt13.51;kernel-zfcpdump-devel - 4.18.0-193,4.18.0-147.13.2;perf - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,4.18.0-193,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-193,4.18.0-147.13.2;kernel-debuginfo - 3.10.0-514.76.1,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1;kernel-debug-devel - 3.10.0-514.76.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-327.88.1,4.18.0-193,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-80.18.1;bpftool - 3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-debug-core - 4.18.0-193.rt13.51;kernel-tools-libs - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-327.88.1;kernel-cross-headers - 4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-debug-debuginfo - 3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;kernel-debug - 3.10.0-514.76.1,3.10.0-327.88.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-957.54.1,4.18.0-193,4.18.0-193,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2;kernel-devel - 4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-327.88.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2;bpftool-debuginfo - 4.18.0-193,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-80.18.1;kpatch-patch-3_10_0-1062_12_1 - 1-2,1-2;kernel-zfcpdump-core - 4.18.0-147.13.2,4.18.0-193;kernel-debug-core - 4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193;kernel-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;python-perf - 3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-core - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-debug - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-rt-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1062.26.1;python3-perf - 4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2;kernel-tools - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel-debug-modules - 4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-trace-kvm - 3.10.0-1127.8.2.rt56.1103;kernel-rt-debuginfo-common-x86_64 - 4.18.0-193.rt13.51;kernel-tools-libs-devel - 3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193;kernel-tools-debuginfo - 3.10.0-1062.26.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-80.18.1,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-693.67.1;kernel-rt-modules - 4.18.0-193.rt13.51;kernel-rt-doc - 3.10.0-1127.8.2.rt56.1103;kernel-rt-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;python-perf-debuginfo - 3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1;kernel-headers - 3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-trace - 3.10.0-1127.8.2.rt56.1103;kernel-debuginfo-common-x86_64 - 3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-514.76.1,4.18.0-193,3.10.0-957.54.1;kernel-rt - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-zfcpdump - 4.18.0-147.13.2,4.18.0-193;kernel-rt-debug-modules-extra - 4.18.0-193.rt13.51;python3-perf-debuginfo - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193;kernel-rt-modules-extra - 4.18.0-193.rt13.51</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19768 (High) detected in linuxlinux-4.19.279 - ## CVE-2019-19768 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.4.0-rc2, there is a use-after-free (read) in the __blk_add_trace function in kernel/trace/blktrace.c (which is used to fill out a blk_io_trace structure and place it in a per-cpu sub-buffer).
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19768>CVE-2019-19768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-19768">https://nvd.nist.gov/vuln/detail/CVE-2019-19768</a></p>
<p>Release Date: 2020-06-10</p>
<p>Fix Resolution: kernel-doc - 3.10.0-514.76.1,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-327.88.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1;kernel-rt-core - 4.18.0-193.rt13.51;kernel-rt-debug-debuginfo - 4.18.0-193.rt13.51;kernel-abi-whitelists - 3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1;kernel-zfcpdump-modules - 4.18.0-193,4.18.0-147.13.2;kernel-rt-trace-devel - 3.10.0-1127.8.2.rt56.1103;kernel-debug-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;kernel-bootwrapper - 3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-193.rt13.51;kernel-rt-debug-modules - 4.18.0-193.rt13.51;kernel-zfcpdump-devel - 4.18.0-193,4.18.0-147.13.2;perf - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,4.18.0-193,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-193,4.18.0-147.13.2;kernel-debuginfo - 3.10.0-514.76.1,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1;kernel-debug-devel - 3.10.0-514.76.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-327.88.1,4.18.0-193,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-80.18.1;bpftool - 3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-debug-core - 4.18.0-193.rt13.51;kernel-tools-libs - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-327.88.1;kernel-cross-headers - 4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-debug-debuginfo - 3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;kernel-debug - 3.10.0-514.76.1,3.10.0-327.88.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-957.54.1,4.18.0-193,4.18.0-193,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2;kernel-devel - 4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-327.88.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2;bpftool-debuginfo - 4.18.0-193,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-80.18.1;kpatch-patch-3_10_0-1062_12_1 - 1-2,1-2;kernel-zfcpdump-core - 4.18.0-147.13.2,4.18.0-193;kernel-debug-core - 4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193;kernel-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;python-perf - 3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-core - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-debug - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-rt-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1062.26.1;python3-perf - 4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2;kernel-tools - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel-debug-modules - 4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-trace-kvm - 3.10.0-1127.8.2.rt56.1103;kernel-rt-debuginfo-common-x86_64 - 4.18.0-193.rt13.51;kernel-tools-libs-devel - 3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193;kernel-tools-debuginfo - 3.10.0-1062.26.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-80.18.1,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-693.67.1;kernel-rt-modules - 4.18.0-193.rt13.51;kernel-rt-doc - 3.10.0-1127.8.2.rt56.1103;kernel-rt-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;python-perf-debuginfo - 3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1;kernel-headers - 3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-trace - 3.10.0-1127.8.2.rt56.1103;kernel-debuginfo-common-x86_64 - 3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-514.76.1,4.18.0-193,3.10.0-957.54.1;kernel-rt - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-zfcpdump - 4.18.0-147.13.2,4.18.0-193;kernel-rt-debug-modules-extra - 4.18.0-193.rt13.51;python3-perf-debuginfo - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193;kernel-rt-modules-extra - 4.18.0-193.rt13.51</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files kernel trace blktrace c kernel trace blktrace c vulnerability details in the linux kernel there is a use after free read in the blk add trace function in kernel trace blktrace c which is used to fill out a blk io trace structure and place it in a per cpu sub buffer publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kernel doc kernel rt core kernel rt debug debuginfo kernel abi whitelists kernel zfcpdump modules kernel rt trace devel kernel debug modules extra kernel rt debug kvm kernel bootwrapper kernel rt debuginfo kernel rt debug modules kernel zfcpdump devel perf kernel zfcpdump modules extra kernel debuginfo kernel debug devel bpftool kernel rt debug core kernel tools libs perf debuginfo kernel cross headers kernel debug debuginfo kernel debug kernel devel kernel bpftool debuginfo kpatch patch kernel zfcpdump core kernel debug core kernel modules extra kernel rt debug devel python perf kernel core kernel rt debug kernel rt devel kernel debuginfo common perf kernel tools kernel debug modules kernel rt trace kvm kernel rt debuginfo common kernel tools libs devel kernel modules kernel tools debuginfo kernel rt modules kernel rt doc kernel rt kvm python perf debuginfo kernel headers kernel rt trace kernel debuginfo common kernel rt kernel zfcpdump kernel rt debug modules extra perf debuginfo kernel rt modules extra step up your open source security game with mend | 0 |
14,651 | 9,393,458,194 | IssuesEvent | 2019-04-07 11:49:45 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Custom shortcuts for menus and everything inside the editor | enhancement junior job topic:editor usability | # Feature proposal
**Godot version:** 3.0.6
**TL;DR: Please, extend the list of available shortcuts in the Editor -> Editor Settings -> Shortcuts so that it contains main window menu items as well**
Hi all,
I'm following the tutorial and suddenly I realizied that I had to navigate to Project -> Project Settings quite often during the last two days. That led me to the idea of learning the shortcut for this menu item. Though, there's no such, I hoped to find something related at Editor -> Editor Settings -> Shortcuts, but there's totally nothing related to menus.
Most IDEs (like ones from JetBrains) offer you an ability to customize shortcuts for almost everything, and there's even a default shortcut for, say, project settings. I find that quite convenient, but failed to find such an ability in Godot.
Maybe I just overlooked something? | True | Custom shortcuts for menus and everything inside the editor - # Feature proposal
**Godot version:** 3.0.6
**TL;DR: Please, extend the list of available shortcuts in the Editor -> Editor Settings -> Shortcuts so that it contains main window menu items as well**
Hi all,
I'm following the tutorial and suddenly I realizied that I had to navigate to Project -> Project Settings quite often during the last two days. That led me to the idea of learning the shortcut for this menu item. Though, there's no such, I hoped to find something related at Editor -> Editor Settings -> Shortcuts, but there's totally nothing related to menus.
Most IDEs (like ones from JetBrains) offer you an ability to customize shortcuts for almost everything, and there's even a default shortcut for, say, project settings. I find that quite convenient, but failed to find such an ability in Godot.
Maybe I just overlooked something? | usab | custom shortcuts for menus and everything inside the editor feature proposal godot version tl dr please extend the list of available shortcuts in the editor editor settings shortcuts so that it contains main window menu items as well hi all i m following the tutorial and suddenly i realizied that i had to navigate to project project settings quite often during the last two days that led me to the idea of learning the shortcut for this menu item though there s no such i hoped to find something related at editor editor settings shortcuts but there s totally nothing related to menus most ides like ones from jetbrains offer you an ability to customize shortcuts for almost everything and there s even a default shortcut for say project settings i find that quite convenient but failed to find such an ability in godot maybe i just overlooked something | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.