| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
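The schema above can be mirrored in a small pandas frame for inspection; the values below are illustrative stand-ins taken from the first record, not a way to load the (unnamed) source file:

```python
import pandas as pd

# A minimal frame mirroring a subset of the columns in the header above.
# Values are copied from the first record purely as an illustration.
df = pd.DataFrame({
    "id": [29479297472.0],
    "type": ["IssuesEvent"],
    "created_at": ["2023-06-02 03:06:34"],
    "repo": ["returntocorp/semgrep"],
    "action": ["closed"],
    "title": ["aliengrep: add an option for caseless matching"],
    "labels": ["priority:medium lang:aliengrep"],
    "label": ["priority"],
    "binary_label": [1],
})

# `label` holds the class name string; `binary_label` is the 0/1 target.
print(df[["title", "label", "binary_label"]])
```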
804,211 | 29,479,297,472 | IssuesEvent | 2023-06-02 03:06:34 | returntocorp/semgrep | https://api.github.com/repos/returntocorp/semgrep | closed | aliengrep: add an option for caseless matching | priority:medium lang:aliengrep | **Is your feature request related to a problem? Please describe.**
Caseless matching allows matching case-insensitive languages like HTML and derivatives, HTTP headers, English text, etc. It is possible and easy to implement in aliengrep thanks to PCRE having options for case-insensitive matching.
**Describe the solution you'd like**
Support a new option in the `options` section of Semgrep rules:
```
options:
generic_caseless: true
```
| 1.0 | aliengrep: add an option for caseless matching - **Is your feature request related to a problem? Please describe.**
Caseless matching allows matching case-insensitive languages like HTML and derivatives, HTTP headers, English text, etc. It is possible and easy to implement in aliengrep thanks to PCRE having options for case-insensitive matching.
**Describe the solution you'd like**
Support a new option in the `options` section of Semgrep rules:
```
options:
generic_caseless: true
```
| priority | aliengrep add an option for caseless matching is your feature request related to a problem please describe caseless matching allows matching case insensitive languages like html and derivatives http headers english text etc it is possible and easy to implement in aliengrep thanks to pcre having options for case insensitive matching describe the solution you d like support a new option in the options section of semgrep rules options generic caseless true | 1 |
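The caseless option requested in the record above relies on PCRE's case-insensitive matching. A minimal illustration of the underlying idea, using Python's `re` module rather than Semgrep or aliengrep themselves:

```python
import re

# A pattern written in lowercase, as a generic rule author might.
pattern = r"content-type:\s*text/html"

# Without a caseless flag, only exact-case text matches.
assert re.search(pattern, "Content-Type: text/HTML") is None

# With re.IGNORECASE (the PCRE-style caseless option), case-insensitive
# targets such as HTML tags or HTTP headers match as intended.
assert re.search(pattern, "Content-Type: text/HTML", re.IGNORECASE) is not None
```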
474,598 | 13,672,430,792 | IssuesEvent | 2020-09-29 08:29:43 | bcgov/ols-router | https://api.github.com/repos/bcgov/ols-router | closed | Create a Route Planner Sandbox application | Route Planner Sandbox enhancement estimate needed ferries functional route planner medium priority restriction-aware routing road events task time-dependent routing truck routing turn costs | Route Planner Sandbox is an application that lets you interactively test the Route Planner API. It can be used to create and update test and benchmark routes stored in a web-accessible document.
RPS will also let you study the effects of route planner API changes on the ability to effectively work out commercial vehicle routes. RPS may also be useful as a rapid prototyping environment in permit application requirements analysis.
RPS will have no user authentication or access control and will have a disclaimer that it is not to be used for actual commercial vehicle routing.
Initially, RPS will be used to interactively create and edit test and benchmark routes for the TransLink Commercial Vehicle Route Planner.
Areas of study include:
* ways to use local knowledge of the road network to improve the route planner and/or create tailored, but sub-optimal wrt time/distance, routes.
* effectively assisting permit clerks in search of efficient routes that involve manual intervention such as counterflow maneuvers (e.g., having flag people close a section of road to allow the vehicle to travel down the middle)
* supporting designated truck route and oversize corridor design | 1.0 | Create a Route Planner Sandbox application - Route Planner Sandbox is an application that lets you interactively test the Route Planner API. It can be used to create and update test and benchmark routes stored in a web-accessible document.
RPS will also let you study the effects of route planner API changes on the ability to effectively work out commercial vehicle routes. RPS may also be useful as a rapid prototyping environment in permit application requirements analysis.
RPS will have no user authentication or access control and will have a disclaimer that it is not to be used for actual commercial vehicle routing.
Initially, RPS will be used to interactively create and edit test and benchmark routes for the TransLink Commercial Vehicle Route Planner.
Areas of study include:
* ways to use local knowledge of the road network to improve the route planner and/or create tailored, but sub-optimal wrt time/distance, routes.
* effectively assisting permit clerks in search of efficient routes that involve manual intervention such as counterflow maneuvers (e.g., having flag people close a section of road to allow the vehicle to travel down the middle)
* supporting designated truck route and oversize corridor design | priority | create a route planner sandbox application route planner sandbox is an application that lets you interactively test the route planner api it can be used to create and update test and benchmark routes stored in a web accessible document rps will also let you study the effects of route planner api changes on the ability to effectively work out commercial vehicle routes rps may also be useful as a rapid prototyping environment in permit application requirements analysis rps will have no user authentication or access control and will have a disclaimer that it is not to be used for actual commercial vehicle routing initially rps will be used to interactively create and edit test and benchmark routes for the translink commercial vehicle route planner areas of study include ways to use local knowledge of the road network to improve the route planner and or create tailored but sub optimal wrt time distance routes effectively assisting permit clerks in search of efficient routes that involve manual intervention such as counterflow maneuvers e g having flag people close a section of road to allow the vehicle to travel down the middle supporting designated truck route and oversize corridor design | 1 |
460,763 | 13,217,876,310 | IssuesEvent | 2020-08-17 07:41:37 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1693] Part of UI isn't updated immediately | Category: UI Priority: Medium Status: Fixed | 
> * 5. some elements in law don't upgrade if you add/delete something. you need to select fields or do something else. Step to reproduce:
> * start a new law:
> 
> * add some trigger:
> 
> * reselect trigger and this section:
> 
> * add some condition, we have overlap here:
> 
> * selecting something else will fix it:
> 
> * but if you delete this condition then you will have a lot of space here:
> 
_Originally posted by @SlayksWood in https://github.com/StrangeLoopGames/EcoIssues/issues/16954#issuecomment-652828277_ | 1.0 | [0.9.0 staging-1693] Part of UI isn't updated immediately - 
> * 5. some elements in law don't upgrade if you add/delete something. you need to select fields or do something else. Step to reproduce:
> * start a new law:
> 
> * add some trigger:
> 
> * reselect trigger and this section:
> 
> * add some condition, we have overlap here:
> 
> * selecting something else will fix it:
> 
> * but if you delete this condition then you will have a lot of space here:
> 
_Originally posted by @SlayksWood in https://github.com/StrangeLoopGames/EcoIssues/issues/16954#issuecomment-652828277_ | priority | part of ui isn t updated immediately some elements in law don t upgrade if you add delete something you need to select fields or do something else step to reproduce start a new law add some trigger reselect trigger and this section add some condition we have overlap here selecting something else will fix it but if you delete this condition then you will have a lot of space here originally posted by slaykswood in | 1 |
370,221 | 10,927,010,075 | IssuesEvent | 2019-11-22 15:49:47 | she-code-africa/SCA-Website | https://api.github.com/repos/she-code-africa/SCA-Website | closed | Set up staging environment | priority: medium type: task | This is to deploy the website for test purposes and also the environment we will use for test continuously.
We are yet to decide on a hosting provider. Maybe Heroku? | 1.0 | Set up staging environment - This is to deploy the website for test purposes and also the environment we will use for test continuously.
We are yet to decide on a hosting provider. Maybe Heroku? | priority | set up staging environment this is to deploy the website for test purposes and also the environment we will use for test continuously we are yet to decide on a hosting provider maybe heroku | 1 |
381,923 | 11,297,925,960 | IssuesEvent | 2020-01-17 07:38:45 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Processed count for import extension is showing 0 after import site is completed | Area: AE > PersonaBar Ext > SiteImportExport.Web Effort: Medium Priority: Medium Status: Closed Type: Bug | <!--
If you need community support or would like to solicit a Request for Comments (RFC), please post to the DNN Community forums at https://dnncommunity.org/forums for now. In the future, we are planning to implement a more robust solution for cultivating new ideas and nuturing these from concept to creation. We will update this template when this solution is generally available. In the meantime, we appreciate your patience as we endeavor to streamline our GitHub focus and efforts.
Please read the CONTRIBUTING guidelines at https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md prior to submitting an issue.
Any potential security issues SHOULD NOT be posted on GitHub. Instead, please send an email to security@dnnsoftware.com.
-->
## Description of bug
Processed count for import extension is showing 0 after import site is completed
## Steps to reproduce
1. Login as host user into DNN Platform.
2. Go to Persona Bar > Settings > Import/Export
3. Select the default site and export it with all the options enabled and mode FULL. Check the no of extension, no of Pages & No of Assets displayed on the export summary report.
4. Login to new site, Go to Persona Bar > Settings > Import/Export
5. Import the Exported file into new site.
6. Check the successful import summary.
## Current behavior
Extensions is showing 0/7 after import is completed
## Expected behavior
Extensions should show 7/7 after import is completed.
## Screenshots

## Error information
Provide any error information (console errors, error logs, etc.) related to this bug.
## Additional context
The processed count should always show the number of items processed, even if the extension import didn't happen.
## Affected version
<!--
Please add X in at least one of the boxes as appropriate. In order for an issue to be accepted, a developer needs to be able to reproduce the issue on a currently supported version. If you are looking for a workaround for an issue with an older version, please visit the forums at https://dnncommunity.org/forums
-->
* [x] 10.0.0 alpha build
* [x] 9.5.0 alpha build
* [x] 9.4.4 latest supported release
## Affected browser
<!--
Check all that apply, and add more if necessary. As appropriate, please specify the exact version(s) of the browser and operating system.
-->
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer 11
* [x] Microsoft Edge (Classic)
* [x] Microsoft Edge Chromium
| 1.0 | Processed count for import extension is showing 0 after import site is completed - <!--
If you need community support or would like to solicit a Request for Comments (RFC), please post to the DNN Community forums at https://dnncommunity.org/forums for now. In the future, we are planning to implement a more robust solution for cultivating new ideas and nuturing these from concept to creation. We will update this template when this solution is generally available. In the meantime, we appreciate your patience as we endeavor to streamline our GitHub focus and efforts.
Please read the CONTRIBUTING guidelines at https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md prior to submitting an issue.
Any potential security issues SHOULD NOT be posted on GitHub. Instead, please send an email to security@dnnsoftware.com.
-->
## Description of bug
Processed count for import extension is showing 0 after import site is completed
## Steps to reproduce
1. Login as host user into DNN Platform.
2. Go to Persona Bar > Settings > Import/Export
3. Select the default site and export it with all the options enabled and mode FULL. Check the no of extension, no of Pages & No of Assets displayed on the export summary report.
4. Login to new site, Go to Persona Bar > Settings > Import/Export
5. Import the Exported file into new site.
6. Check the successful import summary.
## Current behavior
Extensions is showing 0/7 after import is completed
## Expected behavior
Extensions should show 7/7 after import is completed.
## Screenshots

## Error information
Provide any error information (console errors, error logs, etc.) related to this bug.
## Additional context
The processed count should always show the number of items processed, even if the extension import didn't happen.
## Affected version
<!--
Please add X in at least one of the boxes as appropriate. In order for an issue to be accepted, a developer needs to be able to reproduce the issue on a currently supported version. If you are looking for a workaround for an issue with an older version, please visit the forums at https://dnncommunity.org/forums
-->
* [x] 10.0.0 alpha build
* [x] 9.5.0 alpha build
* [x] 9.4.4 latest supported release
## Affected browser
<!--
Check all that apply, and add more if necessary. As appropriate, please specify the exact version(s) of the browser and operating system.
-->
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer 11
* [x] Microsoft Edge (Classic)
* [x] Microsoft Edge Chromium
| priority | processed count for import extension is showing after import site is completed if you need community support or would like to solicit a request for comments rfc please post to the dnn community forums at for now in the future we are planning to implement a more robust solution for cultivating new ideas and nuturing these from concept to creation we will update this template when this solution is generally available in the meantime we appreciate your patience as we endeavor to streamline our github focus and efforts please read the contributing guidelines at prior to submitting an issue any potential security issues should not be posted on github instead please send an email to security dnnsoftware com description of bug processed count for import extension is showing after import site is completed steps to reproduce login as host user into dnn platform go to persona bar settings import export select the default site and export it with all the options enabled and mode full check the no of extension no of pages no of assets displayed on the export summary report login to new site go to persona bar settings import export import the exported file into new site check the successful import summary current behavior extensions is showing after import is completed expected behavior extensions should show after import is completed screenshots error information provide any error information console errors error logs etc related to this bug additional context the processed count show always show the number of item processed even import extension doesn t happened affected version please add x in at least one of the boxes as appropriate in order for an issue to be accepted a developer needs to be able to reproduce the issue on a currently supported version if you are looking for a workaround for an issue with an older version please visit the forums at alpha build alpha build latest supported release affected browser check all that apply and add more if necessary as 
appropriate please specify the exact version s of the browser and operating system chrome firefox safari internet explorer microsoft edge classic microsoft edge chromium | 1 |
766,420 | 26,883,091,369 | IssuesEvent | 2023-02-05 21:35:49 | schemathesis/schemathesis | https://api.github.com/repos/schemathesis/schemathesis | closed | [FEATURE] Do not copy schema components that are not referenced | Priority: Medium Type: Enhancement Difficulty: Medium | Now, all components are copied to intermediate schemas passed to `hypothesis-jsonschema` so all references can be resolved. The problem is that if the schema is huge, it might be problematic in terms of performance (a lot of things are serialized to JSON in `hypothesis-jsonschema`) or storage (if the end-user stores all failures). The idea is to copy only components that can be reached from the intermediate schema. | 1.0 | [FEATURE] Do not copy schema components that are not referenced - Now, all components are copied to intermediate schemas passed to `hypothesis-jsonschema` so all references can be resolved. The problem is that if the schema is huge, it might be problematic in terms of performance (a lot of things are serialized to JSON in `hypothesis-jsonschema`) or storage (if the end-user stores all failures). The idea is to copy only components that can be reached from the intermediate schema. | priority | do not copy schema components that are not referenced now all components are copied to intermediate schemas passed to hypothesis jsonschema so all references can be resolved the problem is that if the schema is huge it might be problematic in terms of performance a lot of things are serialized to json in hypothesis jsonschema or storage if the end user stores all failures the idea is to copy only components that can be reached from the intermediate schema | 1 |
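The "copy only reachable components" idea in the record above amounts to a reachability traversal over `$ref` pointers. A hypothetical sketch (not schemathesis's actual implementation) for OpenAPI-style local references:

```python
def reachable_refs(schema, components, seen=None):
    """Collect component names reachable from `schema` via local "$ref" pointers."""
    if seen is None:
        seen = set()
    if isinstance(schema, dict):
        ref = schema.get("$ref", "")
        if ref.startswith("#/components/schemas/"):
            name = ref.rsplit("/", 1)[1]
            if name not in seen:
                seen.add(name)
                # Follow the referenced component so transitive refs are found too.
                reachable_refs(components[name], components, seen)
        for value in schema.values():
            reachable_refs(value, components, seen)
    elif isinstance(schema, list):
        for item in schema:
            reachable_refs(item, components, seen)
    return seen

components = {
    "User": {"type": "object", "properties": {"pet": {"$ref": "#/components/schemas/Pet"}}},
    "Pet": {"type": "object"},
    "Unused": {"type": "object"},  # never referenced -> need not be copied
}
schema = {"$ref": "#/components/schemas/User"}

# Only User and Pet are reachable; Unused can be dropped from the intermediate schema.
print(sorted(reachable_refs(schema, components)))  # ['Pet', 'User']
```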
469,054 | 13,496,999,422 | IssuesEvent | 2020-09-12 05:44:14 | csesoc/csesoc.unsw.edu.au | https://api.github.com/repos/csesoc/csesoc.unsw.edu.au | opened | Adoption of WEBP format | Priority: Medium Type: Enhancement | We should look into using webp files for our static website images instead of png files. The main reasons for this switch are:
- Smaller size
- Faster loading
Source https://insanelab.com/blog/web-development/webp-web-design-vs-jpeg-gif-png | 1.0 | Adoption of WEBP format - We should look into using webp files for our static website images instead of png files. The main reasons for this switch are:
- Smaller size
- Faster loading
Source https://insanelab.com/blog/web-development/webp-web-design-vs-jpeg-gif-png | priority | adoption of webp format we should look into using webp files for our static website images instead of png files the main reasons for this switch are smaller size faster loading source | 1 |
506,321 | 14,662,742,385 | IssuesEvent | 2020-12-29 08:03:06 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Wrong display in attempts status? | area/frontend priority/medium type/bug | ## Expected Behavior
I am testing:
- a source setup with ExchangeRateAPI
- a destination with Local CSV
- a frequency of every 5 mins
After letting the scheduler run successfully for a few trials, I changed the permission to remove write access to the local directory and observed how the system behaves with failures...
## Current Behavior
I observe 3 attempts and for some reason, the third one is marked with a green check (even though it is still failing without write permissions):

(Sorry i forgot to turn off my "dark" theme mode in my browser for the screenshot..)
| 1.0 | Wrong display in attempts status? - ## Expected Behavior
I am testing:
- a source setup with ExchangeRateAPI
- a destination with Local CSV
- a frequency of every 5 mins
After letting the scheduler run successfully for a few trials, I changed the permission to remove write access to the local directory and observed how the system behaves with failures...
## Current Behavior
I observe 3 attempts and for some reason, the third one is marked with a green check (even though it is still failing without write permissions):

(Sorry i forgot to turn off my "dark" theme mode in my browser for the screenshot..)
| priority | wrong display in attempts status expected behavior i am testing a source setup with exchangerateapi a destination with local csv a frequency of every mins after letting the scheduler run successfully a few trials i changed the permission to remove write access to the local directory and observe how the system behaves with failures current behavior i observe attempts and for some reason the third one is marked with a green check even though it is still failing without write permissions sorry i forgot to turn off my dark theme mode in my browser for the screenshot | 1 |
320,345 | 9,779,634,620 | IssuesEvent | 2019-06-07 14:56:21 | canonical-web-and-design/mir-server.io | https://api.github.com/repos/canonical-web-and-design/mir-server.io | closed | The main image is of Gnome desktop | Priority: Medium | The image should be Mir related, or at least related to digital signage/kiosk.
Even a picture of Ubuntu Touch would be relevant. | 1.0 | The main image is of Gnome desktop - The image should be Mir related, or at least related to digital signage/kiosk.
Even a picture of Ubuntu Touch would be relevant. | priority | the main image is of gnome desktop the image should be mir related or at least related to digital signage kiosk even a picture of ubuntu touch would be relevant | 1 |
289,333 | 8,869,140,312 | IssuesEvent | 2019-01-11 03:31:50 | Hyracan/48532854823523 | https://api.github.com/repos/Hyracan/48532854823523 | closed | Add IP addresses to the network port for rotation (see comments) | Medium priority enchancement | List of IP addresses that should be in rotation in addition to the proxy:
92.53.89.114 - currently used by the site and already in rotation.
92.53.89.115
92.53.89.116
92.53.89.117
92.53.89.118 | 1.0 | Add IP addresses to the network port for rotation (see comments) - List of IP addresses that should be in rotation in addition to the proxy:
92.53.89.114 - currently used by the site and already in rotation.
92.53.89.115
92.53.89.116
92.53.89.117
92.53.89.118 | priority | add ip addresses to the network port for rotation see comments list of ip addresses that should be in rotation in addition to the proxy currently used by the site and already in rotation | 1 |
282,123 | 8,703,891,923 | IssuesEvent | 2018-12-05 17:50:07 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: minimap is not refreshed if players use a cart/vehicle. | Medium Priority | **Version:** 0.7.6.1 beta

 | 1.0 | USER ISSUE: minimap is not refreshed if players use a cart/vehicle. - **Version:** 0.7.6.1 beta

 | priority | user issue minimap is not refreshed if players use a cart vehicle version beta | 1 |
642,759 | 20,912,643,024 | IssuesEvent | 2022-03-24 10:42:32 | ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal | https://api.github.com/repos/ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal | opened | (ONBOARDING) Onboarding on first use of the app | Medium Priority | As a user, I want to be introduced to the app the first time I use it, so that I understand how to use the app and am motivated to use its features. | 1.0 | (ONBOARDING) Onboarding on first use of the app - As a user, I want to be introduced to the app the first time I use it, so that I understand how to use the app and am motivated to use its features. | priority | onboarding onboarding on first use of the app as a user i want to be introduced to the app the first time i use it so that i understand how to use the app and am motivated to use its features | 1 |
664,068 | 22,238,665,919 | IssuesEvent | 2022-06-09 01:06:46 | Cockatrice/Cockatrice | https://api.github.com/repos/Cockatrice/Cockatrice | closed | Sound Do Not Overlap | App - Cockatrice Bug Medium Priority | **System Information:**
Client Version: 2.8.0 (2021-01-26)
Client Operating System: Windows 10 (10.0)
Build Architecture: 64-bit
Qt Version: 5.12.9
System Locale: en_US
Install Mode: Standard
_______________________________________________________________________________________
I have Uploaded a custom sound pack, but I've noticed that after I have one sound play, I need to wait until that sound file ends completely before any other sound plays instead of overlapping the sounds.
_______________________________________________________________________________________
**Steps to reproduce:**
- Do any action that produces a sound
- Do a second action that should produce a sound while the previous sound is still playing
| 1.0 | Sound Do Not Overlap - **System Information:**
Client Version: 2.8.0 (2021-01-26)
Client Operating System: Windows 10 (10.0)
Build Architecture: 64-bit
Qt Version: 5.12.9
System Locale: en_US
Install Mode: Standard
_______________________________________________________________________________________
I have Uploaded a custom sound pack, but I've noticed that after I have one sound play, I need to wait until that sound file ends completely before any other sound plays instead of overlapping the sounds.
_______________________________________________________________________________________
**Steps to reproduce:**
- Do any action that produces a sound
- Do a second action that should produce a sound while the previous sound is still playing
| priority | sound do not overlap system information client version client operating system windows build architecture bit qt version system locale en us install mode standard i have uploaded a custom sound pack but i ve noticed that after i have one sound play i need to wait until that sound file ends completely before any other sound plays instead of overlapping the sounds steps to reproduce do any action that produces a sound do a second action that should produce a sound while the previous sound is still playing | 1 |
115,550 | 4,675,789,507 | IssuesEvent | 2016-10-07 09:15:39 | BinPar/PPD | https://api.github.com/repos/BinPar/PPD | opened | PLANNING REPORT: ADD DATE FIELDS TO SCREEN AND EXCEL | Priority: Medium | Add the date fields that are used as filters, both on screen and in Excel.
Currently shown:
Actual publication date in country of printing
Expected sale date in subsidiary
New-release service date
Add:
- Initial publication date:
- Date of entry into the Production Dept.:
- Estimated date in country of printing:
- Warehouse entry date:
- SAP registration date:
- Actual on-sale date in subsidiary:
CORRECT: *"filal" should be "filial" | 1.0 | PLANNING REPORT: ADD DATE FIELDS TO SCREEN AND EXCEL - Add the date fields that are used as filters, both on screen and in Excel.
Currently shown:
Actual publication date in country of printing
Expected sale date in subsidiary
New-release service date
Add:
- Initial publication date:
- Date of entry into the Production Dept.:
- Estimated date in country of printing:
- Warehouse entry date:
- SAP registration date:
- Actual on-sale date in subsidiary:
CORRECT: *"filal" should be "filial" | priority | planning report add date fields to screen and excel add the date fields that are used as filters both on screen and in excel currently shown actual publication date in country of printing expected sale date in subsidiary new release service date add initial publication date date of entry into the production dept estimated date in country of printing warehouse entry date sap registration date actual on sale date in subsidiary correct filal should be filial | 1 |
798,253 | 28,241,173,028 | IssuesEvent | 2023-04-06 07:17:28 | yunki-kim/card-monkey-BE-refactor | https://api.github.com/repos/yunki-kim/card-monkey-BE-refactor | opened | [refactor] Add exception handling to the password change feature | Status: In Progress For: Backend Priority: Medium Type: Feature | ## Description
Add exception handling to the password change method
## Tasks
- [ ] Add exception handling
## Reference | 1.0 | [refactor] Add exception handling to the password change feature - ## Description
Add exception handling to the password change method
## Tasks
- [ ] Add exception handling
## Reference | priority | add exception handling to the password change feature description add exception handling to the password change method tasks add exception handling reference | 1 |
489,377 | 14,105,494,006 | IssuesEvent | 2020-11-06 13:35:38 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Break Media Lib when deleting a Media field from an Content Type | priority: medium source: plugin:upload status: confirmed type: bug | # **Bug**
In Strapi, after deleting a **Media** Field from a **Collection-Type**, a **GET request to /upload/files** throws an **Error**.
This should not happen.
**Steps to reproduce the behavior**
1. Create a new Strapi App.
2. Create a **Collection-Type** with a **Media** Field.
3. Create a new **Entry** from this **Collection-Type** with a **File** uploaded.
4. Delete the **Media** Field from the **Collection-Type**.
5. Finally going to the Media Library (**GET request to /upload/files**) will result in an **ERROR**.
**Expected behavior**
When deleting a **Media** Field from a **Collection-Type** should remove any relation between the Entries that used **Media** as a Fields and the uploaded files.
**Screenshots**

**Code snippets**
This Bug can be achieved **without** using code, this scenario **only uses the Strapi CMS**.
**System**
- Node.js version: v12.16.1.
- NPM version: 6.13.2.
- Strapi version: v3.0.0-beta.20.1
- Database: Default (SQLite, I believe).
- Operating system: Windows.
# **FIX**
I did find a "solution" to this issue.
1. Add the Media field back to the Collection-Type.
2. Remove the Media field manually from every Entry.
3. Now you can remove the Media field from the collection-type without triggering this error.
## **Possible solution**
As stated before, when deleting the Media field from the Collection-Type should remove the Media attached to every Entry that uses it.
| 1.0 | Break Media Lib when deleting a Media field from an Content Type - # **Bug**
In Strapi, after deleting a **Media** Field from a **Collection-Type**, a **GET request to /upload/files** throws an **Error**.
This should not happen.
**Steps to reproduce the behavior**
1. Create a new Strapi App.
2. Create a **Collection-Type** with a **Media** Field.
3. Create a new **Entry** from this **Collection-Type** with a **File** uploaded.
4. Delete the **Media** Field from the **Collection-Type**.
5. Finally going to the Media Library (**GET request to /upload/files**) will result in an **ERROR**.
**Expected behavior**
When deleting a **Media** Field from a **Collection-Type** should remove any relation between the Entries that used **Media** as a Fields and the uploaded files.
**Screenshots**

**Code snippets**
This Bug can be achieved **without** using code, this scenario **only uses the Strapi CMS**.
**System**
- Node.js version: v12.16.1.
- NPM version: 6.13.2.
- Strapi version: v3.0.0-beta.20.1
- Database: Default (SQLite, I believe).
- Operating system: Windows.
# **FIX**
I did find a "solution" to this issue.
1. Add the Media field back to the Collection-Type.
2. Remove the Media field manually from every Entry.
3. Now you can remove the Media field from the collection-type without triggering this error.
## **Possible solution**
As stated before, when deleting the Media field from the Collection-Type should remove the Media attached to every Entry that uses it.
| priority | break media lib when deleting a media field from an content type bug in strapi after deleting a media field from a collection type a get request to upload files throws an error this should not happened steps to reproduce the behavior create a new strapi app create a collection type with a media field create a new entry from this collection type with a file uploaded delete the media field from the collection type finally going to the media library get request to upload files will result in an error expected behavior when deleting a media field from a collection type should remove any relation between the entries that used media as a fields and the uploaded files screenshots code snippets this bug can be achieved without using code this scenario only uses the strapi cms system node js version npm version strapi version beta database default sqllite i believe operating system windows fix i did find a solution to this issue add the media field back to the collection type remove the media field manually from every entry now you can remove the media field from the collection type without triggering this error possible solution as stated before when deleting the media field from the collection type should remove the media attached to every entry that uses it | 1 |
351,487 | 10,519,400,623 | IssuesEvent | 2019-09-29 17:44:01 | cuappdev/ithaca-transit-ios | https://api.github.com/repos/cuappdev/ithaca-transit-ios | closed | Take out "Teleportation" alert if start and end location == each other | Priority: Medium Type: Bug | Seems to annoy a lot of ppl | 1.0 | Take out "Teleportation" alert if start and end location == each other - Seems to annoy a lot of ppl | priority | take out teleportation alert if start and end location each other seems to annoy a lot of ppl | 1 |
479,643 | 13,804,164,802 | IssuesEvent | 2020-10-11 07:33:42 | AY2021S1-TIC4001-2/tp | https://api.github.com/repos/AY2021S1-TIC4001-2/tp | closed | Update user guide | priority.Medium type.Task | ... with add income category, add expense category, and delete income/expense category commands. | 1.0 | Update user guide - ... with add income category, add expense category, and delete income/expense category commands. | priority | update user guide with add income category add expense category and delete income expense category commands | 1 |
562,129 | 16,638,664,341 | IssuesEvent | 2021-06-04 04:50:42 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.2.x beta] Pending Elections icon always visible | Category: UI Priority: Medium Squad: Mountain Goat Status: Not reproduced Type: Bug | The Pending Elections icon on the right side of the screen is always visible since 0.9.2
Seen on two different servers (launched pre 0.9.2 then upgraded) and a fresh 0.9.2.2 (for testing purpose).
| 1.0 | [0.9.2.x beta] Pending Elections icon always visible - The Pending Elections icon on the right side of the screen is always visible since 0.9.2
Seen on two different servers (launched pre 0.9.2 then upgraded) and a fresh 0.9.2.2 (for testing purpose).
| priority | pending elections icon always visible the pending elections icon on the right side of the screen is always visible since seen on two different servers launched pre then upgraded and a fresh for testing purpose | 1 |
19,871 | 2,622,174,271 | IssuesEvent | 2015-03-04 00:15:56 | byzhang/leveldb | https://api.github.com/repos/byzhang/leveldb | closed | Allow leveldb to be binded to by FFI | auto-migrated OpSys-All Priority-Medium Type-Enhancement | ```
It would be nice if I could bind to leveldb using FFI
(https://github.com/ffi/ffi). To allow FFI bindings, leveldb needs to expose a
C API (extern "C" { ... }) and build a dynamically-linked shared-library
(libleveldb.so).
```
Original issue reported on code.google.com by `postmode...@gmail.com` on 29 Jul 2011 at 11:25 | 1.0 | Allow leveldb to be binded to by FFI - ```
It would be nice if I could bind to leveldb using FFI
(https://github.com/ffi/ffi). To allow FFI bindings, leveldb needs to expose a
C API (extern "C" { ... }) and build a dynamically-linked shared-library
(libleveldb.so).
```
Original issue reported on code.google.com by `postmode...@gmail.com` on 29 Jul 2011 at 11:25 | priority | allow leveldb to be binded to by ffi it would be nice if i could bind to leveldb using ffi to allow ffi bindings leveldb needs to expose a c api extern c and build a dynamically linked shared library libleveldb so original issue reported on code google com by postmode gmail com on jul at | 1 |
275,156 | 8,575,062,185 | IssuesEvent | 2018-11-12 16:19:15 | naccyde/yall | https://api.github.com/repos/naccyde/yall | opened | Check log display when receiving SIGSEGV or such signal | Priority: Medium Status: On Hold Type: Enhancement | ## Summary
Check the library behavior when the application receive a system's signal (`SIGSEGV` and such).
## Steps to reproduce
N/A
## What is the current bug behavior?
N/A
## What is the expected correct behavior?
All the log messages should be displayed. The current buffer of the writer thread should not be discarded, as it could contain a set of crash-relevant logs.
## Relevant logs and/or screenshots
N/A
## Possible fixes
If some logs are missing it could be interesting to find a way to write them before closing. Catching signals is not the better way... | 1.0 | Check log display when receiving SIGSEGV or such signal - ## Summary
Check the library behavior when the application receive a system's signal (`SIGSEGV` and such).
## Steps to reproduce
N/A
## What is the current bug behavior?
N/A
## What is the expected correct behavior?
All the log messages should be displayed. The current buffer of the writer thread should not be discarded, as it could contain a set of crash-relevant logs.
## Relevant logs and/or screenshots
N/A
## Possible fixes
If some logs are missing it could be interesting to find a way to write them before closing. Catching signals is not the better way... | priority | check log display when receiving sigsegv or such signal summary check the library behavior when the application receive a system s signal sigsegv and such steps to reproduce n a what is the current bug behavior n a what is the expected correct behavior all the log message should be displayed the current buffer of the writer thread should not be discarded as is could contains a set of crash relevant logs relevant logs and or screenshots n a possible fixes if some logs are missing it could be interesting to find a way to write them before closing catching signals is not the better way | 1 |
780,987 | 27,417,609,706 | IssuesEvent | 2023-03-01 14:45:59 | PrefectHQ/prefect | https://api.github.com/repos/PrefectHQ/prefect | closed | Orion - add search functionality in block selection. | enhancement status:accepted ui priority:medium | ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar request and didn't find it.
- [X] I searched the Prefect documentation for this feature.
### Prefect Version
2.x
### Describe the current behavior
If I define a block as an input for a flow or as a attribute of another block, I get a drop-down in Orion. If the list is long I have to scroll through lots of options.
### Describe the proposed behavior
Add search functionality to the drop-down. If I click a field in Orion that is of any block type, I can search through that list by typing.
The behavior would be similar how the search for issues here in GitHub works.
<img src="https://user-images.githubusercontent.com/24698503/197032103-1248981b-0436-4ebd-8783-24b1fb01b095.jpg" width="300">
### Example Use
This is especially helpful if one has lots of blocks of the same type. Say I have a custom Block called `ObjectDetectionModel`.
Each of these contains one trained and published model. If I have 100 of these the pure drop-down becomes a pain to use.
### Additional context
_No response_ | 1.0 | Orion - add search functionality in block selection. - ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar request and didn't find it.
- [X] I searched the Prefect documentation for this feature.
### Prefect Version
2.x
### Describe the current behavior
If I define a block as an input for a flow or as a attribute of another block, I get a drop-down in Orion. If the list is long I have to scroll through lots of options.
### Describe the proposed behavior
Add search functionality to the drop-down. If I click a field in Orion that is of any block type, I can search through that list by typing.
The behavior would be similar how the search for issues here in GitHub works.
<img src="https://user-images.githubusercontent.com/24698503/197032103-1248981b-0436-4ebd-8783-24b1fb01b095.jpg" width="300">
### Example Use
This is especially helpful if one has lots of blocks of the same type. Say I have a custom Block called `ObjectDetectionModel`.
Each of these contains one trained and published model. If I have 100 of these the pure drop-down becomes a pain to use.
### Additional context
_No response_ | priority | orion add search functionality in block selection first check i added a descriptive title to this issue i used the github search to find a similar request and didn t find it i searched the prefect documentation for this feature prefect version x describe the current behavior if i define a block as an input for a flow or as a attribute of another block i get a drop down in orion if the list is long i have to scroll through lots of options describe the proposed behavior add search functionality to the drop down if i click a field in orion that is of any block type i can search through that list by typing the behavior would be similar how the search for issues here in github works example use this is especially helpful if one has lots of blocks of the same type say i have a custom block called objectdetectionmodel each of these contains one trained and published model if i have of these the pure drop down becomes a pain to use additional context no response | 1 |
93,872 | 3,912,667,215 | IssuesEvent | 2016-04-20 11:25:03 | parallelus/Plugin-Installer-for-Runway | https://api.github.com/repos/parallelus/Plugin-Installer-for-Runway | opened | Install free plugins from WP repo rather than from included zip files | enhancement Priority 2: Medium | We want to change the way the plugin installer works so that free plugins (such as Sidekick, Ninja Forms and Simple Colorbox for example) are installed from the WordPress plugin repository rather than from zip files included in the theme package.
For free plugins, when you choose them in the Runway admin to set up the plugin installer, currently it goes to the WordPress repository and downloads the zip files into the 'extensions/plugin-installer/plugins' folder. That's what we need to change; what we need now is for it to write references into the JSON file so that it knows what plugins to use, but not keep local copies.
Then, in the standalone theme, when users click Install for any of the free required or recommended plugins it installs it from the WordPress repository.
With regard to updating any of these free plugins, we don’t need to do anything at all with the plugin installer, just let WordPress do its normal update notifications thing. | 1.0 | Install free plugins from WP repo rather than from included zip files - We want to change the way the plugin installer works so that free plugins (such as Sidekick, Ninja Forms and Simple Colorbox for example) are installed from the WordPress plugin repository rather than from zip files included in the theme package.
For free plugins, when you choose them in the Runway admin to set up the plugin installer, currently it goes to the WordPress repository and downloads the zip files into the 'extensions/plugin-installer/plugins' folder. That's what we need to change; what we need now is for it to write references into the JSON file so that it knows what plugins to use, but not keep local copies.
Then, in the standalone theme, when users click Install for any of the free required or recommended plugins it installs it from the WordPress repository.
With regard to updating any of these free plugins, we don’t need to do anything at all with the plugin installer, just let WordPress do its normal update notifications thing. | priority | install free plugins from wp repo rather than from included zip files we want to change the way the plugin installer works so that free plugins such as sidekick ninja forms and simple colorbox for example are installed from the wordpress plugin repository rather than from zip files included in the theme package for free plugins when you choose them in the runway admin to set up the plugin installer currently it goes to the wordpress repository and downloads the zip files into the extensions plugin installer plugins folder that s what we need to change what we need now is for it to write references into the json file so that it knows what plugins to use but not have a local copies then in the standalone theme when users click install for any of the free required or recommended plugins it installs it from the wordpress repository with regard to updating any of these free plugins we don’t need to do anything at all with the plugin installer just let wordpress do its normal update notifications thing | 1 |
320,804 | 9,789,338,080 | IssuesEvent | 2019-06-10 09:33:17 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Tutorial "Forage for food" glitch | Medium Priority QA Staging | 1. After restart tutorials sometimes You need to collect plants very far from your position.

2. Sometimes only 1 plant appears. I think a minimum of 3 plants is needed for the tutorial.
.
3. When you're too hungry to work, the Tutorial "Forage for Food" appears but markers don't appear.


| 1.0 | Tutorial "Forage for food" glitch - 1. After restart tutorials sometimes You need to collect plants very far from your position.

2. Sometimes only 1 plant appears. I think a minimum of 3 plants is needed for the tutorial.
.
3. When you're too hungry to work, the Tutorial "Forage for Food" appears but markers don't appear.


| priority | tutorial forage for food glitch after restart tutorials sometimes you need to collect plants very far from your position sometimes plant appears i think need minimum plants for tutorial when you hungry to work tutorial forage for food appears but markers don t appear | 1 |
594,288 | 18,042,376,639 | IssuesEvent | 2021-09-18 08:59:44 | medic-code/IOL-Assist | https://api.github.com/repos/medic-code/IOL-Assist | closed | Improve individual IOL Page styling | Style issue Medium priority | Currently have a bare-bones styling for this page. May want to think about a design theme across the App and apply changes to the IOL Page | 1.0 | Improve individual IOL Page styling - Currently have a bare-bones styling for this page. May want to think about a design theme across the App and apply changes to the IOL Page | priority | improve individual iol page styling currently have a bare bones styling for this page may want to think about a design theme across the app and apply changes to the iol page | 1 |
814,732 | 30,519,645,557 | IssuesEvent | 2023-07-19 07:08:43 | ArizonaGreenTea05/FinancialOverview | https://api.github.com/repos/ArizonaGreenTea05/FinancialOverview | opened | WinFormsFinance: Enhance sales | Kind: enhancement Module: WinFormsFinance Priority: medium | Depends on #50
- [ ] different pages for sale definition and overview
- [ ] main page contains table with all sales and combo box to choose if it should be calculated up/down to daily, weekly, monthly or yearly (similar to current all-sales)
- [ ] overview page contains multiple tables for daily, weekly, monthly, yearly
- [ ] add button on main and overview page leads to new dialog to define a new sale | 1.0 | WinFormsFinance: Enhance sales - Depends on #50
- [ ] different pages for sale definition and overview
- [ ] main page contains table with all sales and combo box to choose if it should be calculated up/down to daily, weekly, monthly or yearly (similar to current all-sales)
- [ ] overview page contains multiple tables for daily, weekly, monthly, yearly
- [ ] add button on main and overview page leads to new dialog to define a new sale | priority | winformsfinance enhance sales depends on different pages for sale definition and overview main page contains table with all sales and combo box to choose if it should be calculated up down to daily weekly monthly or yearly similar to current all sales overview page contains multiple tables for daily weekly monthly yearly add button on main and overview page leads to new dialog to define a new sale | 1 |
128,292 | 5,052,295,734 | IssuesEvent | 2016-12-21 01:25:05 | JustBru00/RenamePlugin | https://api.github.com/repos/JustBru00/RenamePlugin | closed | Add messages.yml | Addition Request Medium Priority TODO | Requested by @Paras on spigotmc.org. https://www.spigotmc.org/threads/epicrename.51650/page-6#post-1749180
So basically redo the config system. :smile:
| 1.0 | Add messages.yml - Requested by @Paras on spigotmc.org. https://www.spigotmc.org/threads/epicrename.51650/page-6#post-1749180
So basically redo the config system. :smile:
| priority | add messages yml requested by paras on spigotmc org so basically redo the config system smile | 1 |
22,775 | 2,650,921,453 | IssuesEvent | 2015-03-16 06:46:31 | grepper/tovid | https://api.github.com/repos/grepper/tovid | closed | makexml tovid_encoded.mpg was not found; mplex created tovid_encoded.1.mpg | bug imported Priority-Medium wontfix | _From [DaleEMo...@gmail.com](https://code.google.com/u/109909325065118593133/) on October 05, 2007 08:45:02_
Howdy y'all;
When running tovid GUI 0.31 makexml can't find the file created by mplex.
Here's the specifics from my log file:
\- - - - -
mplex -V -f 8 -o /tmp/1/theWarANW.mpg.tovid_encoded.%d.mpg
/home/dalem/theWarANW.mpg.tovid_encoded.0/video.m2v
/home/dalem/theWarANW.mpg.tovid_encoded.0/audio.ac3
Multiplexing finished successfully
Output files:
4.2G /tmp/1/theWarANW.mpg.tovid_encoded.1.mpg
4.2G total
=========================================================
Statistics written to /home/dalem/.tovid/stats.tovid
Cleaning up...
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/video.yuv'
Removing temporary files...
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/tovid.scratch'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/video.m2v'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/audio.ac3'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/tovid.log'
removed directory: `/home/dalem/theWarANW.mpg.tovid_encoded.0'
=========================================================
Done!
=========================================================
Running command: makexml -quiet -overwrite -dvd -menu
"/tmp/1/A_Necessary_War.mpg" "/tmp/1/theWarANW.mpg.tovid_encoded.mpg" -out
"/tmp/1/The_War"
\--------------------------------
makexml
A script to generate XML for authoring a VCD, SVCD, or DVD.
Part of the tovid suite, version 0.31 http://www.tovid.org --------------------------------
Adding a titleset-level menu using file: /tmp/1/A_Necessary_War.mpg
The file /tmp/1/theWarANW.mpg.tovid_encoded.mpg was not found. Exiting.
\- - - - -
Many thanks for any suggestions,
Dale E. Moore
**Attachment:** [bad1.log](http://code.google.com/p/tovid/issues/detail?id=14)
_Original issue: http://code.google.com/p/tovid/issues/detail?id=14_ | 1.0 | makexml tovid_encoded.mpg was not found; mplex created tovid_encoded.1.mpg - _From [DaleEMo...@gmail.com](https://code.google.com/u/109909325065118593133/) on October 05, 2007 08:45:02_
Howdy y'all;
When running tovid GUI 0.31 makexml can't find the file created by mplex.
Here's the specifics from my log file:
\- - - - -
mplex -V -f 8 -o /tmp/1/theWarANW.mpg.tovid_encoded.%d.mpg
/home/dalem/theWarANW.mpg.tovid_encoded.0/video.m2v
/home/dalem/theWarANW.mpg.tovid_encoded.0/audio.ac3
Multiplexing finished successfully
Output files:
4.2G /tmp/1/theWarANW.mpg.tovid_encoded.1.mpg
4.2G total
=========================================================
Statistics written to /home/dalem/.tovid/stats.tovid
Cleaning up...
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/video.yuv'
Removing temporary files...
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/tovid.scratch'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/video.m2v'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/audio.ac3'
removed `/home/dalem/theWarANW.mpg.tovid_encoded.0/tovid.log'
removed directory: `/home/dalem/theWarANW.mpg.tovid_encoded.0'
=========================================================
Done!
=========================================================
Running command: makexml -quiet -overwrite -dvd -menu
"/tmp/1/A_Necessary_War.mpg" "/tmp/1/theWarANW.mpg.tovid_encoded.mpg" -out
"/tmp/1/The_War"
\--------------------------------
makexml
A script to generate XML for authoring a VCD, SVCD, or DVD.
Part of the tovid suite, version 0.31 http://www.tovid.org --------------------------------
Adding a titleset-level menu using file: /tmp/1/A_Necessary_War.mpg
The file /tmp/1/theWarANW.mpg.tovid_encoded.mpg was not found. Exiting.
\- - - - -
Many thanks for any suggestions,
Dale E. Moore
**Attachment:** [bad1.log](http://code.google.com/p/tovid/issues/detail?id=14)
_Original issue: http://code.google.com/p/tovid/issues/detail?id=14_ | priority | makexml tovid encoded mpg was not found mplex created tovid encoded mpg from on october howdy y all when running tovid gui makexml can t find the file created by mplex here s the specifics from my log file mplex v f o tmp thewaranw mpg tovid encoded d mpg home dalem thewaranw mpg tovid encoded video home dalem thewaranw mpg tovid encoded audio multiplexing finished successfully output files tmp thewaranw mpg tovid encoded mpg total statistics written to home dalem tovid stats tovid cleaning up removed home dalem thewaranw mpg tovid encoded video yuv removing temporary files removed home dalem thewaranw mpg tovid encoded tovid scratch removed home dalem thewaranw mpg tovid encoded video removed home dalem thewaranw mpg tovid encoded audio removed home dalem thewaranw mpg tovid encoded tovid log removed directory home dalem thewaranw mpg tovid encoded done running command makexml quiet overwrite dvd menu tmp a necessary war mpg tmp thewaranw mpg tovid encoded mpg out tmp the war makexml a script to generate xml for authoring a vcd svcd or dvd part of the tovid suite version adding a titleset level menu using file tmp a necessary war mpg the file tmp thewaranw mpg tovid encoded mpg was not found exiting many thanks for any suggestions dale e moore attachment original issue | 1 |
166,374 | 6,303,826,153 | IssuesEvent | 2017-07-21 14:35:59 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Search is not working, some configuration changed | bug Priority: Medium | Seems like the configuration for search changed, so we have to validate the UI. | 1.0 | [studio-ui] Search is not working, some configuration changed - Seems like the configuration for search changed, so we have to validate the UI. | priority | search is not working some configuration changed seems like the configuration for search changed so we have to validate the ui | 1 |
208,347 | 7,153,260,829 | IssuesEvent | 2018-01-26 00:41:48 | vmware/vic-product | https://api.github.com/repos/vmware/vic-product | closed | vm-support on VCH should include Harbor logs as well which are under /var/log/harbor | component/ova priority/medium team/lifecycle triage/proposed-1.4 | @lgayatri commented on [Fri May 26 2017](https://github.com/vmware/vic/issues/5265)
**User Statement:**
The support bundle that gets generated on VCH with vm-support should include Harbor logs
**Details:**
vm-support on VCH contains only vmware-* files from /var/log and does not include the harbor log folder, which is needed to debug Harbor issues
**Acceptance Criteria:**
Extracted vm-support bundle of VCH, and found only below logs
root@vic-st-h2-132 [ /var/log/harbor/2017-05-26/tmp/vm-support.qabKha/var/log ]# ls
vmware-vgauthsvc.log.0 vmware-vmsvc.1.log vmware-vmsvc.2.log vmware-vmsvc.3.log vmware-vmsvc.4.log vmware-vmsvc.log
ls /var/log contains:
root@vic-st-h2-132 [ /var/log ]# ls
btmp harbor installer.log lastlog vmware-vmsvc.1.log vmware-vmsvc.3.log vmware-vmsvc.log
cloud-init.log journal vmware-vgauthsvc.log.0 vmware-vmsvc.2.log vmware-vmsvc.4.log wtmp
Missing log folder is "harbor"
Please include harbor log folder in the support bundle.
---
@anchal-agrawal commented on [Fri May 26 2017](https://github.com/vmware/vic/issues/5265#issuecomment-304334029)
@lgayatri Looks like this is the OVA applianceVM, not the VCH endpointVM. Ping @andrewtchin and @frapposelli for adding an estimate and priority for the work involved.
| 1.0 | vm-support on VCH should include Harbor logs as well which are under /var/log/harbor - @lgayatri commented on [Fri May 26 2017](https://github.com/vmware/vic/issues/5265)
**User Statement:**
The support bundle that gets generated on VCH with vm-support should include Harbor logs
**Details:**
vm-support on VCH contains only vmware-* files from /var/log and does not include the harbor log folder, which is needed to debug Harbor issues
**Acceptance Criteria:**
Extracted vm-support bundle of VCH, and found only below logs
root@vic-st-h2-132 [ /var/log/harbor/2017-05-26/tmp/vm-support.qabKha/var/log ]# ls
vmware-vgauthsvc.log.0 vmware-vmsvc.1.log vmware-vmsvc.2.log vmware-vmsvc.3.log vmware-vmsvc.4.log vmware-vmsvc.log
ls /var/log contains:
root@vic-st-h2-132 [ /var/log ]# ls
btmp harbor installer.log lastlog vmware-vmsvc.1.log vmware-vmsvc.3.log vmware-vmsvc.log
cloud-init.log journal vmware-vgauthsvc.log.0 vmware-vmsvc.2.log vmware-vmsvc.4.log wtmp
Missing log folder is "harbor"
Please include harbor log folder in the support bundle.
---
@anchal-agrawal commented on [Fri May 26 2017](https://github.com/vmware/vic/issues/5265#issuecomment-304334029)
@lgayatri Looks like this is the OVA applianceVM, not the VCH endpointVM. Ping @andrewtchin and @frapposelli for adding an estimate and priority for the work involved.
| priority | vm support on vch should include harbor logs as well which are under var log harbor lgayatri commented on user statement the support bundle that gets generated on vch with vm support should include harbor logs details vm support on vch contains only vmware files from var log and does not include log folder harbor which is needed to debug harbor issues acceptance criteria extracted vm support bundle of vch and found only below logs root vic st ls vmware vgauthsvc log vmware vmsvc log vmware vmsvc log vmware vmsvc log vmware vmsvc log vmware vmsvc log ls var log contains root vic st ls btmp harbor installer log lastlog vmware vmsvc log vmware vmsvc log vmware vmsvc log cloud init log journal vmware vgauthsvc log vmware vmsvc log vmware vmsvc log wtmp missing log folder is harbor please include harbor log folder in the support bundle anchal agrawal commented on lgayatri looks like this is the ova appliancevm not the vch endpointvm ping andrewtchin and frapposelli for adding an estimate and priority for the work involved | 1 |
769,098 | 26,993,320,154 | IssuesEvent | 2023-02-09 21:55:19 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | closed | pointblank simple example fails with a `fmt() unused argument ` error when using {gt} version 0.8.0 | Type: ☹︎ Bug Difficulty: [2] Intermediate Effort: [2] Medium Priority: ♨︎ Critical | ## Prework
* [x] Read and agree to the [code of conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/) and [contributing guidelines](https://github.com/rich-iannone/pointblank/blob/main/.github/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/rich-iannone/pointblank/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Forewords
Congratulations on the fantastic package. I'm using it on a daily basis for data wrangling, and I've had the honor of presenting it in front of the r-toulouse user group community, with great success.
## Description
{pointblank} README simple example fails to print the agent object with an error
```
Error in fmt(data = data, columns = {: argument unused (prepend = TRUE)
```
when using {gt} version 0.8.0
## Reproducible example
``` r
# pak::pak("rstudio/gt@v0.8.0")
library(pointblank)
agent <-
dplyr::tibble(
a = c(5, 7, 6, 5, NA, 7),
b = c(6, 1, 0, 6, 0, 7)
) %>%
create_agent(
label = "A very *simple* example.",
) %>%
col_vals_between(
vars(a), 1, 9,
na_pass = TRUE
) %>%
col_vals_lt(
vars(c), 12,
preconditions = ~ . %>% dplyr::mutate(c = a + b)
) %>%
col_is_numeric(vars(a, b)) %>%
interrogate()
agent
#> Error in fmt(data = data, columns = {: unused argument (prepend = TRUE)
```
<sup>Created on 2022-12-03 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os Ubuntu 22.04.1 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language fr_FR
#> collate fr_FR.UTF-8
#> ctype fr_FR.UTF-8
#> tz Europe/Paris
#> date 2022-12-03
#> pandoc 2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.1)
#> base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.2.1)
#> blastula 0.3.2 2020-05-19 [1] CRAN (R 4.2.1)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.1)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.1)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.1)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.1)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.1)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.1)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.1)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.1)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.1)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.1)
#> gt 0.8.0 2022-12-03 [1] Github (rstudio/gt@0acc7fb)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.1)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.1)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.1)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.1)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.1)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.1)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.1)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.1)
#> pointblank * 0.11.2 2022-10-08 [1] CRAN (R 4.2.1)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.1)
#> reprex 2.0.1 2021-08-05 [3] CRAN (R 4.1.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.1)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.1)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.1)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.1)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.1)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.1)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.1)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.1)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.1)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.1)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.1)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.1)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.1)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.1)
#>
#> [1] /home/____/R/x86_64-pc-linux-gnu-library/4.2
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details>
## Expected result
No error should happen running simple example and correct printing of the {gt} table
``` r
# pak::pak("rstudio/gt@v0.7.0")
library(pointblank)
agent <-
dplyr::tibble(
a = c(5, 7, 6, 5, NA, 7),
b = c(6, 1, 0, 6, 0, 7)
) %>%
create_agent(
label = "A very *simple* example.",
) %>%
col_vals_between(
vars(a), 1, 9,
na_pass = TRUE
) %>%
col_vals_lt(
vars(c), 12,
preconditions = ~ . %>% dplyr::mutate(c = a + b)
) %>%
col_is_numeric(vars(a, b)) %>%
interrogate()
agent
```
<div id="pb_agent" style="overflow-x:auto;overflow-y:auto;width:auto;height:auto;">
::: table removed :::
</div>
<sup>Created on 2022-12-03 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os Ubuntu 22.04.1 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language fr_FR
#> collate fr_FR.UTF-8
#> ctype fr_FR.UTF-8
#> tz Europe/Paris
#> date 2022-12-03
#> pandoc 2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.1)
#> base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.2.1)
#> blastula 0.3.2 2020-05-19 [1] CRAN (R 4.2.1)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.1)
#> commonmark 1.8.1 2022-10-14 [1] CRAN (R 4.2.1)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.1)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.1)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.1)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.1)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.1)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.1)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.1)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.1)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.1)
#> gt 0.7.0 2022-12-03 [1] Github (rstudio/gt@902c9e9)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.1)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.1)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.1)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.1)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.1)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.1)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.1)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.1)
#> pointblank * 0.11.2 2022-10-08 [1] CRAN (R 4.2.1)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.1)
#> reprex 2.0.1 2021-08-05 [3] CRAN (R 4.1.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.1)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.1)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.1)
#> sass 0.4.4 2022-11-24 [1] CRAN (R 4.2.1)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.1)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.1)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.1)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.1)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.1)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.1)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.1)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.1)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.1)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.1)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.1)
#>
#> [1] /home/____/R/x86_64-pc-linux-gnu-library/4.2
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details> | 1.0 | pointblank simple example fails with a `fmt() unused argument ` error when using {gt} version 0.8.0 - ## Prework
* [x] Read and agree to the [code of conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/) and [contributing guidelines](https://github.com/rich-iannone/pointblank/blob/main/.github/CONTRIBUTING.md).
* [x] If there is [already a relevant issue](https://github.com/rich-iannone/pointblank/issues), whether open or closed, comment on the existing thread instead of posting a new issue.
* [x] Post a [minimal reproducible example](https://www.tidyverse.org/help/) so the maintainer can troubleshoot the problems you identify. A reproducible example is:
* [x] **Runnable**: post enough R code and data so any onlooker can create the error on their own computer.
* [x] **Minimal**: reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand.
* [x] **Readable**: format your code according to the [tidyverse style guide](https://style.tidyverse.org/).
## Forewords
Congratulations on the fantastic package. I'm using it on a daily basis when doing data wrangling, and I've had the honor of presenting it in front of the r-toulouse user group community, with great success.
## Description
The simple example in the {pointblank} README fails to print the agent object, raising the error
```
Error in fmt(data = data, columns = {: argument unused (prepend = TRUE)
```
when using {gt} version 0.8.0
## Reproducible example
``` r
# pak::pak("rstudio/gt@v0.8.0")
library(pointblank)
agent <-
dplyr::tibble(
a = c(5, 7, 6, 5, NA, 7),
b = c(6, 1, 0, 6, 0, 7)
) %>%
create_agent(
label = "A very *simple* example.",
) %>%
col_vals_between(
vars(a), 1, 9,
na_pass = TRUE
) %>%
col_vals_lt(
vars(c), 12,
preconditions = ~ . %>% dplyr::mutate(c = a + b)
) %>%
col_is_numeric(vars(a, b)) %>%
interrogate()
agent
#> Error in fmt(data = data, columns = {: argument inutilisé (prepend = TRUE)
```
<sup>Created on 2022-12-03 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os Ubuntu 22.04.1 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language fr_FR
#> collate fr_FR.UTF-8
#> ctype fr_FR.UTF-8
#> tz Europe/Paris
#> date 2022-12-03
#> pandoc 2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.1)
#> base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.2.1)
#> blastula 0.3.2 2020-05-19 [1] CRAN (R 4.2.1)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.1)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.1)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.1)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.1)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.1)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.1)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.1)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.1)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.1)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.1)
#> gt 0.8.0 2022-12-03 [1] Github (rstudio/gt@0acc7fb)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.1)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.1)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.1)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.1)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.1)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.1)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.1)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.1)
#> pointblank * 0.11.2 2022-10-08 [1] CRAN (R 4.2.1)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.1)
#> reprex 2.0.1 2021-08-05 [3] CRAN (R 4.1.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.1)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.1)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.1)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.1)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.1)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.1)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.1)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.1)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.1)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.1)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.1)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.1)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.1)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.1)
#>
#> [1] /home/____/R/x86_64-pc-linux-gnu-library/4.2
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details>
## Expected result
No error should occur when running the simple example, and the {gt} table should print correctly
``` r
# pak::pak("rstudio/gt@v0.7.0")
library(pointblank)
agent <-
dplyr::tibble(
a = c(5, 7, 6, 5, NA, 7),
b = c(6, 1, 0, 6, 0, 7)
) %>%
create_agent(
label = "A very *simple* example.",
) %>%
col_vals_between(
vars(a), 1, 9,
na_pass = TRUE
) %>%
col_vals_lt(
vars(c), 12,
preconditions = ~ . %>% dplyr::mutate(c = a + b)
) %>%
col_is_numeric(vars(a, b)) %>%
interrogate()
agent
```
<div id="pb_agent" style="overflow-x:auto;overflow-y:auto;width:auto;height:auto;">
::: table removed :::
</div>
<sup>Created on 2022-12-03 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os Ubuntu 22.04.1 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language fr_FR
#> collate fr_FR.UTF-8
#> ctype fr_FR.UTF-8
#> tz Europe/Paris
#> date 2022-12-03
#> pandoc 2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.1)
#> base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.2.1)
#> blastula 0.3.2 2020-05-19 [1] CRAN (R 4.2.1)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.1)
#> commonmark 1.8.1 2022-10-14 [1] CRAN (R 4.2.1)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.1)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.1)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.1)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.1)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.1)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.1)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.1)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.1)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.1)
#> gt 0.7.0 2022-12-03 [1] Github (rstudio/gt@902c9e9)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.1)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.1)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.1)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.1)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.1)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.1)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.1)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.1)
#> pointblank * 0.11.2 2022-10-08 [1] CRAN (R 4.2.1)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.1)
#> reprex 2.0.1 2021-08-05 [3] CRAN (R 4.1.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.1)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.1)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.1)
#> sass 0.4.4 2022-11-24 [1] CRAN (R 4.2.1)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.1)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.1)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.1)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.1)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.1)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.1)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.1)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.1)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.1)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.1)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.1)
#>
#> [1] /home/____/R/x86_64-pc-linux-gnu-library/4.2
#> [2] /usr/local/lib/R/site-library
#> [3] /usr/lib/R/site-library
#> [4] /usr/lib/R/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details> | priority | pointblank simple example fails with a fmt unused argument error when using gt version prework read and agree to the and if there is whether open or closed comment on the existing thread instead of posting a new issue post a so the maintainer can troubleshoot the problems you identify a reproducible example is runnable post enough r code and data so any onlooker can create the error on their own computer minimal reduce runtime wherever possible and remove complicated details that are irrelevant to the issue at hand readable format your code according to the forewords congratulation for the fantastic package i m using it on a daily basis when doing data wrangling and i ve had the honor to present it in front of the r toulouse user group community with great success description pointblank readme simple example fails to print the agent object with an error error in fmt data data columns argument unused prepend true when using gt version reproducible example r pak pak rstudio gt library pointblank agent dplyr tibble a c na b c create agent label a very simple example col vals between vars a na pass true col vals lt vars c preconditions dplyr mutate c a b col is numeric vars a b interrogate agent error in fmt data data columns argument inutilisé prepend true created on by the session info r sessioninfo session info ─ session info ─────────────────────────────────────────────────────────────── setting value version r version os ubuntu lts system linux gnu ui language fr fr collate fr fr utf ctype fr fr utf tz europe paris date pandoc usr lib rstudio bin quarto bin tools via rmarkdown ─ packages ─────────────────────────────────────────────────────────────────── package version date utc lib source assertthat cran r cran r blastula cran r cli cran r colorspace cran r dbi cran r digest cran r dplyr cran r evaluate cran r fansi cran r fastmap cran r fs cran r generics cran r cran r glue cran r gt github rstudio gt gtable cran r highr cran r htmltools 
cran r knitr cran r lifecycle cran r magrittr cran r munsell cran r pillar cran r pkgconfig cran r pointblank cran r cran r reprex cran r rlang cran r rmarkdown cran r rstudioapi cran r scales cran r sessioninfo cran r stringi cran r stringr cran r tibble cran r tidyselect cran r cran r vctrs cran r withr cran r xfun cran r yaml cran r home r pc linux gnu library usr local lib r site library usr lib r site library usr lib r library ────────────────────────────────────────────────────────────────────────────── expected result no error should happen running simple example and correct printing of the gt table r pak pak rstudio gt library pointblank agent dplyr tibble a c na b c create agent label a very simple example col vals between vars a na pass true col vals lt vars c preconditions dplyr mutate c a b col is numeric vars a b interrogate agent table removed created on by the session info r sessioninfo session info ─ session info ─────────────────────────────────────────────────────────────── setting value version r version os ubuntu lts system linux gnu ui language fr fr collate fr fr utf ctype fr fr utf tz europe paris date pandoc usr lib rstudio bin quarto bin tools via rmarkdown ─ packages ─────────────────────────────────────────────────────────────────── package version date utc lib source assertthat cran r cran r blastula cran r cli cran r colorspace cran r commonmark cran r dbi cran r digest cran r dplyr cran r evaluate cran r fansi cran r fastmap cran r fs cran r generics cran r cran r glue cran r gt github rstudio gt gtable cran r highr cran r htmltools cran r knitr cran r lifecycle cran r magrittr cran r munsell cran r pillar cran r pkgconfig cran r pointblank cran r cran r reprex cran r rlang cran r rmarkdown cran r rstudioapi cran r sass cran r scales cran r sessioninfo cran r stringi cran r stringr cran r tibble cran r tidyselect cran r cran r vctrs cran r withr cran r xfun cran r yaml cran r home r pc linux gnu library usr local lib r site library usr 
lib r site library usr lib r library ────────────────────────────────────────────────────────────────────────────── | 1 |
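The `argument unused (prepend = TRUE)` error in the pointblank report above is an API-compatibility break: pointblank 0.11.2 calls gt's `fmt()` with a keyword that gt 0.8.0 no longer accepts. The real fix belongs in pointblank's R code, but the defensive pattern can be sketched generically. The Python below is purely illustrative — `fmt_v080` is a made-up stand-in, not gt's actual function:

```python
import inspect

def fmt_v080(data, columns):
    """Stand-in for a callee whose newer API dropped a keyword argument."""
    return (data, columns)

def call_compat(fn, *args, **kwargs):
    """Drop keyword arguments the callee's signature no longer accepts.

    Note: this simple filter assumes the callee does not take **kwargs.
    """
    params = inspect.signature(fn).parameters
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **accepted)

# 'prepend' is silently dropped instead of raising "unused argument".
out = call_compat(fmt_v080, "tbl", columns=["a"], prepend=True)
print(out)  # ('tbl', ['a'])
```

Pinning the gt dependency (as the reporter did with `pak::pak("rstudio/gt@v0.7.0")`) is the short-term workaround; signature-aware call sites are the longer-term defense against this class of breakage.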
254,093 | 8,069,725,846 | IssuesEvent | 2018-08-06 07:15:44 | project8/morpho | https://api.github.com/repos/project8/morpho | closed | Use more functionalities of stan | enhancement medium priority | Currently we are only using morpho for its sampling capability of the likelihood function.
In some special cases (when the mode of the likelihood is the same as its median), a profiled likelihood approach can save us a lot of time by treating all the nuisance parameters.
Stan contains an optimization ('optimizing') method that looks for the parameters which maximize the log-likelihood.
We should unlock this feature with morpho.
The major problem is that the parameters of the sampling and optimizing method are not identical and a selection needs to be done before calling `getattr(sm,stan_mode)`... | 1.0 | Use more functionalities of stan - Currently we are only using morpho for its sampling capability of the likelihood function.
In some special cases (when the mode of the likelihood is the same as its median), a profiled likelihood approach can save us a lot of time by treating all the nuisance parameters.
Stan contains an optimization ('optimizing') method that looks for the parameters which maximize the log-likelihood.
We should unlock this feature with morpho.
The major problem is that the parameters of the sampling and optimizing method are not identical and a selection needs to be done before calling `getattr(sm,stan_mode)`... | priority | use more functionalities of stan currently we are only using morpho for its sampling capability of the likelihood function in some special cases when the mode of the likelihood is the same as its median a profiled likelihood approach can save us a lot of time by treating all the nuisance parameters stan contains a optimization optimizing method where it looks for the parameters which maximize the loglikelihood we should unlock this feature with morpho the major problem is that the parameters of the sampling and optimizing method are not identical and a selection needs to be done before calling getattr sm stan mode | 1 |
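The argument mismatch described above can be handled with an explicit allow-list per mode before dispatching via `getattr(sm, stan_mode)`. A minimal Python sketch — the parameter sets below are illustrative, not the real PyStan signatures:

```python
# Arguments accepted by each Stan mode differ; illustrative subsets only.
ALLOWED_ARGS = {
    "sampling": {"data", "iter", "chains", "warmup", "seed"},
    "optimizing": {"data", "iter", "seed", "algorithm"},
}

def select_args(stan_mode, run_config):
    """Split a run configuration into accepted and dropped arguments."""
    allowed = ALLOWED_ARGS[stan_mode]
    kept = {k: v for k, v in run_config.items() if k in allowed}
    dropped = sorted(set(run_config) - allowed)
    return kept, dropped

def run(sm, stan_mode, run_config):
    """Dispatch to sm.sampling(...) or sm.optimizing(...) with safe kwargs."""
    kwargs, dropped = select_args(stan_mode, run_config)
    if dropped:
        print(f"ignoring {dropped} for mode {stan_mode!r}")
    return getattr(sm, stan_mode)(**kwargs)

config = {"data": {"N": 3}, "iter": 2000, "chains": 4, "algorithm": "LBFGS"}
kept, dropped = select_args("optimizing", config)
print(dropped)  # ['chains']
```

The same run configuration can then drive both modes, with mode-specific keys dropped (and reported) rather than passed through to a call that would reject them.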
356,065 | 10,588,296,748 | IssuesEvent | 2019-10-09 01:24:15 | qlcchain/go-qlc | https://api.github.com/repos/qlcchain/go-qlc | opened | research internal event bus for better performance | Priority: Medium Type: Enhancement | ### Description of the issue
- support FIFO
- worker pool
### Issue-Type
- [ ] bug report
- [x] feature request
- [ ] Documentation improvement
| 1.0 | research internal event bus for better performance - ### Description of the issue
- support FIFO
- worker pool
### Issue-Type
- [ ] bug report
- [x] feature request
- [ ] Documentation improvement
| priority | research internal event bus for better performance description of the issue support fifo worker pool issue type bug report feature request documentation improvement | 1 |
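The two requested properties — FIFO ordering and a worker pool — can be sketched in a few lines. Python is used here only for illustration (the actual go-qlc implementation would be Go), and the class and topic names are made up:

```python
import queue
import threading

class EventBus:
    """FIFO event bus drained by a fixed pool of worker threads."""

    def __init__(self, workers=4):
        self._q = queue.Queue()   # unbounded FIFO queue
        self._subs = {}           # topic -> list of handlers
        for _ in range(workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        self._q.put((topic, payload))   # enqueued in publish order

    def _worker(self):
        while True:
            topic, payload = self._q.get()
            for handler in self._subs.get(topic, []):
                handler(payload)
            self._q.task_done()

    def drain(self):
        self._q.join()   # block until every published event is handled

bus = EventBus(workers=2)
seen, lock = [], threading.Lock()

def record(payload):
    with lock:
        seen.append(payload)

bus.subscribe("block", record)
for i in range(5):
    bus.publish("block", i)
bus.drain()
print(sorted(seen))  # [0, 1, 2, 3, 4]
```

Events are dequeued in FIFO order, but with more than one worker the *completion* order of handlers may interleave — a design point worth settling before implementation.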
612,427 | 19,012,623,105 | IssuesEvent | 2021-11-23 10:58:44 | assurance-maladie-digital/design-system | https://api.github.com/repos/assurance-maladie-digital/design-system | closed | Ajout nom page introduction | medium-priority | Related to US #1342, "Name our Design System"
As a user of the Design System, I want to know its name so that I can refer to it and differentiate it from its competitors.
A series of US in three stages:
(1) preparation of an ideation workshop - see US #1342
(2) running the workshop - see US #1388
**(3) analysis of the results and submission for implementing the name on the introduction page**
At the end of this US, a new US should be created for the graphic design of the name. | 1.0 | Ajout nom page introduction - Related to US #1342, "Name our Design System"
As a user of the Design System, I want to know its name so that I can refer to it and differentiate it from its competitors.
A series of US in three stages:
(1) preparation of an ideation workshop - see US #1342
(2) running the workshop - see US #1388
**(3) analysis of the results and submission for implementing the name on the introduction page**
At the end of this US, a new US should be created for the graphic design of the name. | priority | ajout nom page introduction associé à l us nommer notre designer system en tant qu utilisateur du design system je veux en connaitre le nom afin de pouvoir le nommer et le différencier de ces concurrents série d us en trois temps préparation d un atelier d idéation voir us passage de l atelier voir us analyse des résultats et soumission pour implémentation du nom en page d introduction au terme de l us il conviendra de créer un nouvel us sur la définition graphique du nom | 1 |
77,509 | 3,506,406,157 | IssuesEvent | 2016-01-08 06:32:27 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | sry (BB #554) | migrated Priority: Medium Type: Feature Request | This issue was migrated from bitbucket.
**Original Reporter:** ArtemisCZSK
**Original Date:** 15.03.2014 20:29:43 GMT+0000
**Original Priority:** major
**Original Type:** enhancement
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/554
<hr>
sry | 1.0 | sry (BB #554) - This issue was migrated from bitbucket.
**Original Reporter:** ArtemisCZSK
**Original Date:** 15.03.2014 20:29:43 GMT+0000
**Original Priority:** major
**Original Type:** enhancement
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/554
<hr>
sry | priority | sry bb this issue was migrated from bitbucket original reporter artemisczsk original date gmt original priority major original type enhancement original state invalid direct link sry | 1 |
268,407 | 8,406,645,845 | IssuesEvent | 2018-10-11 18:32:19 | GCE-NEIIST/GCE-NEIIST-webapp | https://api.github.com/repos/GCE-NEIIST/GCE-NEIIST-webapp | closed | Improve the classifier's performance | Priority: Medium Status: Available Type: Enhancement | The classifier implemented to leverage GCE-Thesis has to be improved.
We can evaluate its performance using some internal and external measures:
- Precision
- Recall
- F1 Score
- Error
| 1.0 | Improve the classifier's performance - The classifier implemented to leverage GCE-Thesis has to be improved.
We can evaluate its performance using some internal and external measures:
- Precision
- Recall
- F1 Score
- Error
| priority | improve the classifier s performance the classifier implemented to leverage gce thesis has to be improved we can evaluate its performance recurring to some internal and external measures precision recall score error | 1 |
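The four measures listed can be made concrete with a small worked example. The labels below are invented for illustration; the formulas are the standard binary-classification definitions:

```python
y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # made-up ground truth
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # made-up predictions

# Confusion-matrix counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 3
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

precision = tp / (tp + fp)                           # 0.75
recall = tp / (tp + fn)                              # 0.75
f1 = 2 * precision * recall / (precision + recall)   # 0.75
error = (fp + fn) / len(y_true)                      # 0.25

print(precision, recall, f1, error)
```

In practice `sklearn.metrics` (`precision_score`, `recall_score`, `f1_score`) computes the same quantities; the hand-rolled version is shown only to pin down the definitions.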
205,865 | 7,106,883,506 | IssuesEvent | 2018-01-16 18:01:23 | SmartlyDressedGames/Unturned-4.x-Community | https://api.github.com/repos/SmartlyDressedGames/Unturned-4.x-Community | opened | Test Item for Each Wearable Slot | Priority: Medium Status: To-Do Type: Content | - [x] Shirt: Tee
- [x] Pants: Jeans
- [ ] Gloves: Fingerless - Redo
- [ ] Shoes: Sneakers - Redo
- [ ] Belt: Pistol Holster
- [ ] Vest: Chest Rig
- [ ] Backpack: Rucksack
- [ ] Wrists: Watch
- [ ] Neck: Dogtags
- [ ] Glasses: Aviators
- [ ] Mask: Balaclava
- [ ] Hat: Baseball Cap
- [ ] Overcoat: Hoodie | 1.0 | Test Item for Each Wearable Slot - - [x] Shirt: Tee
- [x] Pants: Jeans
- [ ] Gloves: Fingerless - Redo
- [ ] Shoes: Sneakers - Redo
- [ ] Belt: Pistol Holster
- [ ] Vest: Chest Rig
- [ ] Backpack: Rucksack
- [ ] Wrists: Watch
- [ ] Neck: Dogtags
- [ ] Glasses: Aviators
- [ ] Mask: Balaclava
- [ ] Hat: Baseball Cap
- [ ] Overcoat: Hoodie | priority | test item for each wearable slot shirt tee pants jeans gloves fingerless redo shoes sneakers redo belt pistol holster vest chest rig backpack rucksack wrists watch neck dogtags glasses aviators mask balaclava hat baseball cap overcoat hoodie | 1 |
23,463 | 2,659,694,213 | IssuesEvent | 2015-03-18 22:38:01 | SiCKRAGETV/sickrage-issues | https://api.github.com/repos/SiCKRAGETV/sickrage-issues | closed | Manage Episode Status | 1: Feature request 2: Medium Priority 3: Confirmed branch: develop | <a href="https://github.com/mgaulton"><img src="https://avatars.githubusercontent.com/u/4757726?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [mgaulton](https://github.com/mgaulton)**
_Thursday Nov 27, 2014 at 16:29 GMT_
_Originally opened as https://github.com/SiCKRAGETV/SickRage/issues/970 (14 comment(s))_
----
Wondering if there is a way to flag all episodes with a failed status and change them to another status, i.e. wanted.
| 1.0 | Manage Episode Status - <a href="https://github.com/mgaulton"><img src="https://avatars.githubusercontent.com/u/4757726?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [mgaulton](https://github.com/mgaulton)**
_Thursday Nov 27, 2014 at 16:29 GMT_
_Originally opened as https://github.com/SiCKRAGETV/SickRage/issues/970 (14 comment(s))_
----
Wondering if there is a way to flag all episodes with a failed status and change them to another status, i.e. wanted.
| priority | manage episode status issue by thursday nov at gmt originally opened as comment s wondering if there is a way to flag all failed status episodes and change to another status ie wanted | 1 |
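The request above amounts to a bulk status transition. A sketch of the operation with made-up episode records (not SickRage's actual data model):

```python
episodes = [
    {"id": 1, "status": "failed"},
    {"id": 2, "status": "downloaded"},
    {"id": 3, "status": "failed"},
]

def bulk_set_status(eps, from_status, to_status):
    """Move every episode in from_status to to_status; return the count."""
    changed = 0
    for ep in eps:
        if ep["status"] == from_status:
            ep["status"] = to_status
            changed += 1
    return changed

print(bulk_set_status(episodes, "failed", "wanted"))  # 2
```

A real implementation would run this as a single filtered UPDATE against the episode table rather than iterating in memory.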
419,701 | 12,227,183,137 | IssuesEvent | 2020-05-03 14:15:30 | wevote/WeVoteServer | https://api.github.com/repos/wevote/WeVoteServer | opened | Import Polling Locations from Spreadsheet | Difficulty: Medium Priority 1 | We are able to import a variety of data, like Measures and Candidates. Please extend this system to import polling locations from an uploaded CSV.
This is a related page in the admin interface: https://api.wevoteusa.org/import_export_batches/batch_list/?kind_of_batch=MEASURE&google_civic_election_id=0&state_code= | 1.0 | Import Polling Locations from Spreadsheet - We are able to import a variety of data, like Measures and Candidates. Please extend this system to import polling locations from an uploaded CSV.
This is a related page in the admin interface: https://api.wevoteusa.org/import_export_batches/batch_list/?kind_of_batch=MEASURE&google_civic_election_id=0&state_code= | priority | import polling locations from spreadsheet we are able to import a variety of data like measures and candidates please extend this system to import polling locations from an uploaded csv this is a related page in the admin interface | 1 |
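Since the Measure and Candidate importers already consume uploaded files, the polling-location path can reuse the same shape. A sketch of the parsing step in Python — the column names are hypothetical, not WeVoteServer's actual schema:

```python
import csv
import io

# Hypothetical upload contents; real column names would match the
# project's polling-location model.
RAW = """line1,city,state,zip
123 Main St,Oakland,CA,94601
5 Oak Ave,Berkeley,CA,94704
"""

def parse_polling_locations(text):
    """Parse an uploaded CSV into one dict per polling location."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

locations = parse_polling_locations(RAW)
print(len(locations), locations[0]["city"])  # 2 Oakland
```

The resulting dicts would then be validated (state code, zip format) before being saved, mirroring the existing batch-import flow linked above.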
550,449 | 16,112,897,185 | IssuesEvent | 2021-04-28 01:05:14 | uwblueprint/shoe-project | https://api.github.com/repos/uwblueprint/shoe-project | opened | Missing toasts for when visibility is toggled, map missing on publish story toast | all-stories bug priority: medium | # Description
The publish toast should have a map icon to the left of it like in Figma.

When multi-select is toggled, there should be a toast after the "hide" or "show" buttons are clicked.
https://www.figma.com/file/uVqnweL2ApqvPHWzQYmZ1t/Dashboard?node-id=1159%3A2297
# Potential Solution
- recycle the toast component from publish for the visibility buttons
- add the map icon from Material UI icons.
| 1.0 | Missing toasts for when visibility is toggled, map missing on publish story toast - # Description
The publish toast should have a map icon to the left of it like in Figma.

When multi-select is toggled, there should be a toast after the "hide" or "show" buttons are clicked.
https://www.figma.com/file/uVqnweL2ApqvPHWzQYmZ1t/Dashboard?node-id=1159%3A2297
# Potential Solution
- recycle the toast component from publish for the visibility buttons
- add the map icon from Material UI icons.
| priority | missing toasts for when visibility is toggled map missing on publish story toast description the publish toast should have a map icon to the left of it like in figma when multi select is toggled there should be a toast after the hide or show buttons are clicked potential solution recycle the toast component from publish for the visibility buttons add the map icon from material ui icons | 1 |
48,043 | 2,990,135,158 | IssuesEvent | 2015-07-21 07:12:11 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Jackson Json newline issue | bug imported invalid Priority-Medium | _From [arun0...@gmail.com](https://code.google.com/u/105805403900039762191/) on November 02, 2013 01:49:02_
What steps will reproduce the problem? 1. Using Jackson 2, generate a simple JSON response (which is returned pretty-printed).
2. Example : Ping health response is : "{\n \"status\" : \"Success\",\n \"pong\" : 1\n}" - which is valid JSON but
looks like rest-assured doesn't like it?
3. I get this error : Caused by: groovy.json.JsonException: A JSON payload should start with an openning curly brace '{' or an openning square bracket '['.
Instead, '"{
"status" : "Success",
"pong" : 1
}"' was found on line: 1, column: 1
when I try to assert : assertEquals("Success", from(response).getString("status")); What version of the product are you using? On what operating system? Rest Assured version 1.8.1, Jackson 2.2.3 running on Mac. Please provide any additional information below. using Spring rest template to make http calls - not sure if it matters.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=261_ | 1.0 | Jackson Json newline issue - _From [arun0...@gmail.com](https://code.google.com/u/105805403900039762191/) on November 02, 2013 01:49:02_
What steps will reproduce the problem? 1. Using Jackson 2, generate a simple JSON response (which is returned pretty-printed).
2. Example : Ping health response is : "{\n \"status\" : \"Success\",\n \"pong\" : 1\n}" - which is valid JSON but
looks like rest-assured doesn't like it?
3. I get this error : Caused by: groovy.json.JsonException: A JSON payload should start with an openning curly brace '{' or an openning square bracket '['.
Instead, '"{
"status" : "Success",
"pong" : 1
}"' was found on line: 1, column: 1
when I try to assert : assertEquals("Success", from(response).getString("status")); What version of the product are you using? On what operating system? Rest Assured version 1.8.1, Jackson 2.2.3 running on Mac. Please provide any additional information below. using Spring rest template to make http calls - not sure if it matters.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=261_ | priority | jackson json newline issue from on november what steps will reproduce the problem using jackson generate a simple json which gives back pretty print json as return example ping health response is n status success n pong n which is valid json but looks like rest assured doesn t like it i get this error caused by groovy json jsonexception a json payload should start with an openning curly brace or an openning square bracket instead status success pong was found on line column when i try to assert assertequals success from response getstring status what version of the product are you using on what operating system rest assured version jackson running on mac please provide any additional information below using spring rest template to make http calls not sure if it matters original issue | 1 |
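The payload in the report starts with a double quote, which suggests the body is a JSON *string* that itself contains JSON — i.e. the object was serialized twice somewhere between Jackson and the HTTP response. That diagnosis is an assumption, not confirmed in the original thread, but the symptom is easy to reproduce:

```python
import json

obj = {"status": "Success", "pong": 1}
once = json.dumps(obj, indent=1)   # a valid JSON object: starts with '{'
twice = json.dumps(once)           # a JSON string wrapping that JSON

print(twice[0])                     # '"'  <- why a parser wants '{' or '['
inner = json.loads(twice)           # unwrap the outer string...
print(json.loads(inner)["status"])  # ...then parse the real object: Success
```

If this is the cause, the fix is on the producing side (return the object, not a pre-serialized string), which would explain why the embedded newlines themselves are not the problem.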
22,042 | 2,644,665,673 | IssuesEvent | 2015-03-12 18:07:37 | SCIInstitute/SCIRun | https://api.github.com/repos/SCIInstitute/SCIRun | closed | Remove rescale colormap functionality from ShowField | Framework Graphics Priority-Medium Refactoring | Remove rescale colormap functionality from ShowField and use the RescaleColorMap module instead. | 1.0 | Remove rescale colormap functionality from ShowField - Remove rescale colormap functionality from ShowField and use the RescaleColorMap module instead. | priority | remove rescale colormap functionality from showfield remove rescale colormap functionality from showfield and use the rescalecolormap module instead | 1 |
668,909 | 22,603,423,525 | IssuesEvent | 2022-06-29 11:11:25 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Controller: Missing validation of unsupported PHY when performing PHY update | bug priority: medium area: Bluetooth area: Bluetooth Controller | **Describe the bug**
The PHY update procedure is missing the implementation to handle an unsupported PHY requested by the peer central device.
A PHY update complete event should not be generated to the Host; the connection is maintained on the old PHY, and the Controller should not respond to PDUs received on the unsupported PHY.
This bug leads to failures in the latest conformance tests (new test cases added to the test suite).
**To Reproduce**
Build samples/bluetooth/hci_uart for nrf52dk_nrf52832 with correct configurations required for conformance tests, and execute the tests on a conformance tester.
**Expected behavior**
Test failure.
**Impact**
Conformance test failure
| 1.0 | Bluetooth: Controller: Missing validation of unsupported PHY when performing PHY update - **Describe the bug**
PHY update procedure is missing implementation to handle unsupported PHY requested by peer central device.
PHY update complete should not be generated to Host, connection is maintained on the old PHY and the Controller should not respond to PDUs received on the unsupported PHY.
This bug led to failure of the latest conformance tests (new test cases added in the test suite).
**To Reproduce**
Build samples/bluetooth/hci_uart for nrf52dk_nrf52832 with correct configurations required for conformance tests, and execute the tests on a conformance tester.
**Expected behavior**
Test failure.
**Impact**
Conformance test failure
| priority | bluetooth controller missing validation of unsupported phy when performing phy update describe the bug phy update procedure is missing implementation to handle unsupported phy requested by peer central device phy update complete should not be generated to host connection is maintained on the old phy and the controller should not respond to pdus received on the unsupported phy this bug lead to failure of latest conformance tests new testcases added in test suite to reproduce build samples bluetooth hci uart for with correct configurations required for conformance tests and execute the tests on a conformance tester expected behavior test failure impact conformance test failure | 1 |
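The Controller behaviour expected in the record above — an unsupported PHY request must leave the connection on the old PHY and must not raise a PHY-update-complete event to the host — can be modelled in a few lines. This is an illustrative Python sketch, not Zephyr's actual C controller code; the flag values mirror the BLE 1M/2M/Coded PHY bits, but the function and names are invented:

```python
# Supported-PHY bit flags, modelled on the BLE PHY values (1M / 2M / Coded).
PHY_1M, PHY_2M, PHY_CODED = 0x01, 0x02, 0x04

def handle_phy_request(current_phy, requested_phy, supported_mask):
    """Return (phy_in_use, notify_host).

    If the peer requests a PHY this controller does not support, keep
    the connection on the old PHY and do not generate a PHY update
    complete event -- the validation the issue says was missing.
    """
    if requested_phy & ~supported_mask:
        return current_phy, False   # unsupported: no change, no host event
    return requested_phy, True      # supported: switch and notify the host
```

The key detail is the bitmask check against the controller's own capabilities before acting on the peer's request.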
524,601 | 15,217,488,772 | IssuesEvent | 2021-02-17 16:40:15 | enso-org/ide | https://api.github.com/repos/enso-org/ide | closed | Crash reporting | Category: Controllers Difficulty: Intermediate Hacktoberfest Priority: Medium Type: Enhancement | ### Summary
Our application should gracefully handle all panics which may appear in Rust code, and send the reports to the appointed server.
### Value
Bugs in our code can be easily reported by users and fixed by us.
### Specification
Our application is essentially a web application packed with electron, and all logic is implemented in Rust, and compiled to WASM. The js part only loads the WASM module and calls one entry_point function.
* Create a simple service gathering the crash reports and writing them to a file with the date.
* Whenever rust code panics, the whole page displayed in electron should be refreshed. After refresh the information about crash should be displayed in a html element being a red box at the top of the page. The user should have options to send the report or dismiss the message.
* If application crashes again while information about previous crash is still displayed, the refresh should be skipped, and all html elements except the crash message removed.
* The report should contain the string representation of PanicInfo, and the stacktrace.
* Address and port of the service should be obtained from application options, and default to localhost and arbitrary picked port. The connection error with service should not be fatal.
* The panic message should also be printed to the console (like it is currently), for convenience of debugging.
### Acceptance Criteria & Test Cases
Test with service run on localhost and application with modified rust code:
* with `panic!` during initialization (some function of `IdeInitializer`)
* with `panic!` during some user interaction, e.g. moving node (`GraphEditorIntegratedWithControllerModel::node_moved_in_ui`).
| 1.0 | Crash reporting - ### Summary
Our application should gracefully handle all panics which may appear in Rust code, and send the reports to the appointed server.
### Value
Bugs in our code can be easily reported by users and fixed by us.
### Specification
Our application is essentially a web application packed with electron, and all logic is implemented in Rust, and compiled to WASM. The js part only loads the WASM module and calls one entry_point function.
* Create a simple service gathering the crash reports and writing them to a file with the date.
* Whenever rust code panics, the whole page displayed in electron should be refreshed. After refresh the information about crash should be displayed in a html element being a red box at the top of the page. The user should have options to send the report or dismiss the message.
* If application crashes again while information about previous crash is still displayed, the refresh should be skipped, and all html elements except the crash message removed.
* The report should contain the string representation of PanicInfo, and the stacktrace.
* Address and port of the service should be obtained from application options, and default to localhost and arbitrary picked port. The connection error with service should not be fatal.
* The panic message should also be printed to the console (like it is currently), for convenience of debugging.
### Acceptance Criteria & Test Cases
Test with service run on localhost and application with modified rust code:
* with `panic!` during initialization (some function of `IdeInitializer`)
* with `panic!` during some user interaction, e.g. moving node (`GraphEditorIntegratedWithControllerModel::node_moved_in_ui`).
| priority | crash reporting summary our application should graciously handle all panics which may appear in rust code and send the reports to the appointed server value bugs in our code can be easily reported by users and fixed by us specification our application is essentially a web application packed with electron and all logic is implemented in rust and compiled to the wasm the js part only loads the wasm module and call one entry point function create a simple service gathering the crash reports and writting them to the file with date whenever rust code panics the whole page displayed in electron should be refreshed after refresh the information about crash should be displayed in a html element being a red box at the top of the page the user should have options to send the report or dismiss the message if application crashes again while information about previous crash is still displayed the refresh should be skipped and all html elements except the crash message removed the report should contain the string representation of panicinfo and the stacktrace address and port of the service should be obtained from application options and default to localhost and arbitrary picked port the connection error with service should not be fatal the panic message should also be printed to the console like it is currently for convenience of debugging acceptance criteria test cases test with service run on localhost and application with modified rust code with panic during initialization some function of ideinitializer with panic during some user interaction e g moving node grapheditorintegratedwithcontrollermodel node moved in ui | 1 |
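The report format in the specification above — the string representation of the panic plus the stack trace, stamped with a date — can be sketched as follows. This is a hypothetical Python analogue of the Rust panic hook; the field names and localhost service mentioned in the comments are assumptions, not the project's actual wire format:

```python
import traceback
from datetime import datetime, timezone

def build_crash_report(exc):
    """Format a crash the way the spec asks: the string representation
    of the panic plus the stack trace, stamped with a date.

    The report would then be POSTed to the configured service (default
    localhost, arbitrary port), with connection errors treated as
    non-fatal; that transport step is omitted here.
    """
    return {
        "date": datetime.now(timezone.utc).isoformat(),
        "panic_info": repr(exc),
        "stacktrace": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
    }

try:
    1 / 0                       # stand-in for a panic during initialization
except ZeroDivisionError as err:
    report = build_crash_report(err)
```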
151,168 | 5,806,498,624 | IssuesEvent | 2017-05-04 03:05:13 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | Make the footer responsive | Priority: Medium Status: In Progress Type: Feature | It currently looks like this:
<img width="414" alt="screen shot 2017-04-19 at 8 42 17 am" src="https://cloud.githubusercontent.com/assets/334550/25186025/2911f446-24dc-11e7-9bc2-f68d59e8852f.png">
| 1.0 | Make the footer responsive - It currently looks like this:
<img width="414" alt="screen shot 2017-04-19 at 8 42 17 am" src="https://cloud.githubusercontent.com/assets/334550/25186025/2911f446-24dc-11e7-9bc2-f68d59e8852f.png">
| priority | make the footer responsive it currently looks like this img width alt screen shot at am src | 1 |
801,536 | 28,491,901,994 | IssuesEvent | 2023-04-18 11:51:04 | NIAEFEUP/uporto-schedule-scrapper | https://api.github.com/repos/NIAEFEUP/uporto-schedule-scrapper | closed | Remove unneeded scrapper sql columns | medium effort low priority | The scrapper created columns that are not needed in the final dataset, so these must be removed to ensure we don't have purposeless information:
- course_year in the Course_Unit table
- other columns that are a by-product of the professors scrapping (TBD)

| 1.0 | Remove unneeded scrapper sql columns - The scrapper created columns that are not needed in the final dataset, so these must be removed to ensure we don't have purposeless information:
- course_year in the Course_Unit table
- other columns that are a by-product of the professors scrapping (TBD)

| priority | remove unneeded scrapper sql columns the scrapper created columns that are not needed in the final dataset so these must be removed to ensure we don t have purposeless information course year in the course unit table other columns that are a by product of the professors scrapping tbd | 1 |
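The column removal above can be done portably with the classic rebuild-and-rename pattern, since older SQLite versions lack `ALTER TABLE ... DROP COLUMN`. A sketch assuming a SQLite backing store: the table and column names come from the issue, while the remaining columns are invented for illustration (note that `CREATE TABLE ... AS SELECT` does not carry over constraints, so a real migration would recreate those too):

```python
import sqlite3

def drop_column(conn, table, keep_cols):
    """Rebuild `table` keeping only `keep_cols`, effectively dropping
    every other column. Portable across SQLite versions that predate
    ALTER TABLE ... DROP COLUMN."""
    cols = ", ".join(keep_cols)
    cur = conn.cursor()
    cur.execute(f"CREATE TABLE {table}_new AS SELECT {cols} FROM {table}")
    cur.execute(f"DROP TABLE {table}")
    cur.execute(f"ALTER TABLE {table}_new RENAME TO {table}")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course_unit (id INTEGER, name TEXT, course_year INTEGER)")
conn.execute("INSERT INTO course_unit VALUES (1, 'Databases', 2)")
drop_column(conn, "course_unit", ["id", "name"])   # removes course_year
```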
614,425 | 19,182,676,320 | IssuesEvent | 2021-12-04 17:25:34 | BlackDemonZyT/BotSentry | https://api.github.com/repos/BlackDemonZyT/BotSentry | closed | Check Bedrock players (GeyserMC + floodgate) on Velocity | bug good first issue question medium priority | Hello. BotSentry does not check the players from the bedrock on velocity (does not analyze the connection of the player from the bedrock and the rest). If you block the IP address of the player who plays from bedrock (using floodgate the server gets the real ip of the bedrock player), then the player kicks from the server with the message that his IP has been banned using BotSentry and when reconnected he can safely connect. At Bungeecord (Waterfall) everything is as it should be, no problem. This only happens with Velocity.
Sorry my english is bad:) | 1.0 | Check Bedrock players (GeyserMC + floodgate) on Velocity - Hello. BotSentry does not check the players from the bedrock on velocity (does not analyze the connection of the player from the bedrock and the rest). If you block the IP address of the player who plays from bedrock (using floodgate the server gets the real ip of the bedrock player), then the player kicks from the server with the message that his IP has been banned using BotSentry and when reconnected he can safely connect. At Bungeecord (Waterfall) everything is as it should be, no problem. This only happens with Velocity.
Sorry my english is bad:) | priority | check bedrock players geysermc floodgate on velocity hello botsentry does not check the players from the bedrock on velocity does not analyze the connection of the player from the bedrock and the rest if you block the ip address of the player who plays from bedrock using floodgate the server gets the real ip of the bedrock player then the player kicks from the server with the message that his ip has been banned using botsentry and when reconnected he can safely connect at bungeecord waterfall everything is as it should be no problem this only happens with velocity sorry my english is bad | 1 |
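A hedged sketch of the check the report above suggests is being skipped on Velocity: resolve the Floodgate-provided real address before consulting the ban list at connect time, so a banned Bedrock player cannot reconnect. All names here are hypothetical; this is not BotSentry's actual code:

```python
banned_ips = set()

def resolve_real_ip(connection):
    """Floodgate-style lookup: prefer the real Bedrock address if the
    proxy attached one (the attribute name is hypothetical), otherwise
    fall back to the address the proxy saw."""
    return connection.get("bedrock_real_ip") or connection["ip"]

def on_connect(connection):
    """Allow the connection only if its *resolved* address is not
    banned -- the step the report suggests is skipped on Velocity."""
    return resolve_real_ip(connection) not in banned_ips

banned_ips.add("203.0.113.7")   # example address from the documentation range
```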
57,291 | 3,081,254,561 | IssuesEvent | 2015-08-22 14:46:25 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | vertex dragging with shift deselects vertex first | 020 bug imported Priority-Medium | _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on April 29, 2015 12:12:11_
Editor: vertex dragging with shift deselects vertex first
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=505_ | 1.0 | vertex dragging with shift deselects vertex first - _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on April 29, 2015 12:12:11_
Editor: vertex dragging with shift deselects vertex first
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=505_ | priority | vertex dragging with shift deselects vertex first from on april editor vertex dragging with shift deselects vertex first original issue | 1 |
619,659 | 19,531,993,699 | IssuesEvent | 2021-12-30 18:49:14 | bounswe/2021SpringGroup6 | https://api.github.com/repos/bounswe/2021SpringGroup6 | opened | Fix Activity Stream | Status: Not Yet Started Platform: Back-end Priority: Medium | Currently we do not filter the Activity Stream. We should change it so that only the activity from the users the user is following and the sports user has skill level on should be returned. | 1.0 | Fix Activity Stream - Currently we do not filter the Activity Stream. We should change it so that only the activity from the users the user is following and the sports user has skill level on should be returned. | priority | fix activity stream currently we do not filter the activity stream we should change it so that only the activity from the users the user is following and the sports user has skill level on should be returned | 1 |
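The filtering rule requested in the record above can be sketched directly: keep only activity from users the viewer follows, or about sports the viewer has a skill level in. The item shape (`actor`, `sport`) is an assumption, not the project's real model:

```python
def filter_activity(stream, following, skilled_sports):
    """Return only the activity items visible to this user, per the
    issue: items by followed users, or about sports the user has a
    skill level in."""
    return [
        item for item in stream
        if item["actor"] in following or item["sport"] in skilled_sports
    ]

stream = [
    {"actor": "alice", "sport": "tennis"},
    {"actor": "bob",   "sport": "soccer"},
    {"actor": "carol", "sport": "chess"},
]
visible = filter_activity(stream, following={"alice"}, skilled_sports={"soccer"})
```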
470,017 | 13,529,766,498 | IssuesEvent | 2020-09-15 18:49:32 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Speedometer fixes | Priority: Medium Size: Medium Type: Bug | **Describe the bug**
~~Stage enter/exit speedometer does not show when exiting the start zone on linear maps. It used to due to a bug where it's fadeout animation would be started while it wasn't visible, so it was only visible if you restarted your run then began a new one quickly. This should be on though.~~
Not necessarily a bug, but it's not good that changing the gamemode on the hud settings tab loads that gamemode's speedo settings.
**Expected behavior**
~~Show stage exit velocity when leaving the start zone on a linear map.~~
Change how speedometer data is loaded/saved so you can edit a gamemode's settings without loading them.
**Desktop (please complete the following information):**
- OS: Windows
| 1.0 | Speedometer fixes - **Describe the bug**
~~Stage enter/exit speedometer does not show when exiting the start zone on linear maps. It used to due to a bug where it's fadeout animation would be started while it wasn't visible, so it was only visible if you restarted your run then began a new one quickly. This should be on though.~~
Not necessarily a bug, but it's not good that changing the gamemode on the hud settings tab loads that gamemode's speedo settings.
**Expected behavior**
~~Show stage exit velocity when leaving the start zone on a linear map.~~
Change how speedometer data is loaded/saved so you can edit a gamemode's settings without loading them.
**Desktop (please complete the following information):**
- OS: Windows
| priority | speedometer fixes describe the bug stage enter exit speedometer does not show when exiting the start zone on linear maps it used to due to a bug where it s fadeout animation would be started while it wasn t visible so it was only visible if you restarted your run then began a new one quickly this should be on though not necessarily a bug but it s not good that changing the gamemode on the hud settings tab loads that gamemode s speedo settings expected behavior show stage exit velocity when leaving the start zone on a linear map change how speedometer data is loaded saved so you can edit a gamemode s settings without loading them desktop please complete the following information os windows | 1 |
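The load/save change asked for above amounts to separating "edit a gamemode's settings" from "make that gamemode's settings active". An illustrative Python sketch — not the game's actual code, and the structure is invented:

```python
class SpeedoSettings:
    """Per-gamemode settings store in which editing one gamemode's
    values never switches which gamemode is active -- the behaviour
    the issue asks for on the HUD settings tab."""

    def __init__(self):
        self._by_mode = {}
        self.active_mode = None

    def edit(self, mode, key, value):
        """Change stored settings without loading them."""
        self._by_mode.setdefault(mode, {})[key] = value

    def load(self, mode):
        """Explicitly make a gamemode's settings the active ones."""
        self.active_mode = mode
        return self._by_mode.get(mode, {})

settings = SpeedoSettings()
settings.load("surf")
settings.edit("bhop", "units", "ups")   # editing bhop must not activate it
```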
444,006 | 12,804,737,096 | IssuesEvent | 2020-07-03 05:39:39 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1633] NullReferenceExceptions in Browse Servers | Category: Tech Priority: Medium Status: Fixed Week Task | Step to reproduce:
- new game, browse all:

- exception will appear
```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
```
- change filter settings, unselect version match to see any server here:

- apply and select any server in list:

- I have exception again:
```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.SetSelectedServer (ServerListing serverInfo, System.Boolean allowDelete) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.Events.InvokableCall`1[T1].Invoke (T1 args0) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.Events.UnityEvent`1[T0].Invoke (T0 arg0) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.UI.Toggle.Set (System.Boolean value, System.Boolean sendCallback) [0x00000] in <00000000000000000000000000000000>:0
at System.EventHandler`1[TEventArgs].Invoke (System.Object sender, TEventArgs e) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ReleaseMouse (UnityEngine.EventSystems.PointerEventData pointerEvent, UnityEngine.GameObject currentOverGo) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessMousePress (UnityEngine.EventSystems.PointerInputModule+MouseButtonEventData data) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessMouseEvent (System.Int32 id) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.Process () [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.EventSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
UnityEngine.Logger:LogException(Exception, Object)
UnityEngine.Debug:LogException(Exception)
UnityEngine.EventSystems.ExecuteEvents:Execute(GameObject, BaseEventData, EventFunction`1)
UnityEngine.EventSystems.StandaloneInputModule:ReleaseMouse(PointerEventData, GameObject)
UnityEngine.EventSystems.StandaloneInputModule:ProcessMousePress(MouseButtonEventData)
UnityEngine.EventSystems.StandaloneInputModule:ProcessMouseEvent(Int32)
UnityEngine.EventSystems.StandaloneInputModule:Process()
UnityEngine.EventSystems.EventSystem:Update()
```
- press Browse recommended, another exception:

```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.SetSelectedServer (ServerListing serverInfo, System.Boolean allowDelete) [0x00000] in <00000000000000000000000000000000>:0
at ServerListingUI.Init (ServerListing serverInfo, ServerListingGroup container) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab+<>c__DisplayClass28_0.<EnqueueListing>b__0 () [0x00000] in <00000000000000000000000000000000>:0
at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.Update () [0x00000] in <00000000000000000000000000000000>:0
```
[Player.log](https://github.com/StrangeLoopGames/EcoIssues/files/4845587/Player.log)
| 1.0 | [0.9.0 staging-1633] NullReferenceExceptions in Browse Servers - Step to reproduce:
- new game, browse all:

- exception will appear
```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
```
- change filter settings, unselect version match to see any server here:

- apply and select any server in list:

- I have exception again:
```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.SetSelectedServer (ServerListing serverInfo, System.Boolean allowDelete) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.Events.InvokableCall`1[T1].Invoke (T1 args0) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.Events.UnityEvent`1[T0].Invoke (T0 arg0) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.UI.Toggle.Set (System.Boolean value, System.Boolean sendCallback) [0x00000] in <00000000000000000000000000000000>:0
at System.EventHandler`1[TEventArgs].Invoke (System.Object sender, TEventArgs e) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ReleaseMouse (UnityEngine.EventSystems.PointerEventData pointerEvent, UnityEngine.GameObject currentOverGo) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessMousePress (UnityEngine.EventSystems.PointerInputModule+MouseButtonEventData data) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.ProcessMouseEvent (System.Int32 id) [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.StandaloneInputModule.Process () [0x00000] in <00000000000000000000000000000000>:0
at UnityEngine.EventSystems.EventSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
UnityEngine.Logger:LogException(Exception, Object)
UnityEngine.Debug:LogException(Exception)
UnityEngine.EventSystems.ExecuteEvents:Execute(GameObject, BaseEventData, EventFunction`1)
UnityEngine.EventSystems.StandaloneInputModule:ReleaseMouse(PointerEventData, GameObject)
UnityEngine.EventSystems.StandaloneInputModule:ProcessMousePress(MouseButtonEventData)
UnityEngine.EventSystems.StandaloneInputModule:ProcessMouseEvent(Int32)
UnityEngine.EventSystems.StandaloneInputModule:Process()
UnityEngine.EventSystems.EventSystem:Update()
```
- press Browse recommended, another exception:

```
NullReferenceException: Object reference not set to an instance of an object.
at SelectedServerUI.SetSelected (ServerListing serverInfo, System.Boolean showDeleteButton) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.SetSelectedServer (ServerListing serverInfo, System.Boolean allowDelete) [0x00000] in <00000000000000000000000000000000>:0
at ServerListingUI.Init (ServerListing serverInfo, ServerListingGroup container) [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab+<>c__DisplayClass28_0.<EnqueueListing>b__0 () [0x00000] in <00000000000000000000000000000000>:0
at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0
at ServerBrowserTab.Update () [0x00000] in <00000000000000000000000000000000>:0
```
[Player.log](https://github.com/StrangeLoopGames/EcoIssues/files/4845587/Player.log)
| priority | nullreferenceexceptions in browse servers step to reproduce new game browse all exception will appear nullreferenceexception object reference not set to an instance of an object at selectedserverui setselected serverlisting serverinfo system boolean showdeletebutton in change filter settings unselect version match to see any server here apply and select any server in list i have exception again nullreferenceexception object reference not set to an instance of an object at selectedserverui setselected serverlisting serverinfo system boolean showdeletebutton in at serverbrowsertab setselectedserver serverlisting serverinfo system boolean allowdelete in at system action invoke t obj in at unityengine events invokablecall invoke in at unityengine events unityevent invoke in at unityengine ui toggle set system boolean value system boolean sendcallback in at system eventhandler invoke system object sender teventargs e in at unityengine eventsystems executeevents execute unityengine gameobject target unityengine eventsystems baseeventdata eventdata unityengine eventsystems executeevents eventfunction functor in at unityengine eventsystems standaloneinputmodule releasemouse unityengine eventsystems pointereventdata pointerevent unityengine gameobject currentovergo in at unityengine eventsystems standaloneinputmodule processmousepress unityengine eventsystems pointerinputmodule mousebuttoneventdata data in at unityengine eventsystems standaloneinputmodule processmouseevent system id in at unityengine eventsystems standaloneinputmodule process in at unityengine eventsystems eventsystem update in unityengine logger logexception exception object unityengine debug logexception exception unityengine eventsystems executeevents execute gameobject baseeventdata eventfunction unityengine eventsystems standaloneinputmodule releasemouse pointereventdata gameobject unityengine eventsystems standaloneinputmodule processmousepress mousebuttoneventdata unityengine eventsystems standaloneinputmodule processmouseevent unityengine eventsystems standaloneinputmodule process unityengine eventsystems eventsystem update press browse recommended another exception nullreferenceexception object reference not set to an instance of an object at selectedserverui setselected serverlisting serverinfo system boolean showdeletebutton in at serverbrowsertab setselectedserver serverlisting serverinfo system boolean allowdelete in at serverlistingui init serverlisting serverinfo serverlistinggroup container in at serverbrowsertab c b in at system action invoke in at serverbrowsertab update in | 1 |
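The stack traces in the record above all fail inside `SetSelected` when the selected server listing is missing. A minimal stand-in — in Python, not the game's C# code — showing the kind of null guard the traces suggest:

```python
class SelectedServerUI:
    """Minimal stand-in for the panel in the stack traces: treat a
    missing listing as 'clear the selection' instead of dereferencing
    it and throwing a NullReferenceException."""

    def __init__(self):
        self.selected = None

    def set_selected(self, server_info, show_delete_button=False):
        if server_info is None:       # the guard the traces suggest is missing
            self.selected = None
            return
        self.selected = (server_info["name"], show_delete_button)

ui = SelectedServerUI()
ui.set_selected(None)                 # must not raise
ui.set_selected({"name": "White Tiger"})
```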
245,587 | 7,888,087,951 | IssuesEvent | 2018-06-27 20:46:02 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Can't update keywords/subjects through rest API | Medium Priority REST API bug | I'm trying to find a way to update resource keywords through the rest API. [This comment on another issue](https://github.com/hydroshare/hydroshare/issues/1599#issuecomment-262588374) made me think it's possible, but nothing I've tried has worked.
This code snippet shows a bit of what I've tried:
```python
def updateKeywords(self, id, keywords): # type: (str, [str]) -> object
url = "{url_base}/resource/{id}/scimeta/elements/".format(url_base=self.url_base, id=id)
r = self.request('PUT', url, json={'subjects': keywords})
# Also tried:
# r = self.request('PUT', url, json={'keywords': keywords})
# r = self.request('PUT', url, json={'subjects': [{"value": "keyword 1"}, {"value": "keyword 2"}]})
if r.status_code != 202:
raise HydroShareHTTPException((url, 'PUT', r.status_code, keywords))
return r.json()
```
All three attempts to update the keywords (in the code snippet above) return a 202 response, but the keywords/subjects remain unchanged on the resource.
Can someone provide an example that works? | 1.0 | Can't update keywords/subjects through rest API - I'm trying to find a way to update resource keywords through the rest API. [This comment on another issue](https://github.com/hydroshare/hydroshare/issues/1599#issuecomment-262588374) made me think it's possible, but nothing I've tried has worked.
This code snippet shows a bit of what I've tried:
```python
def updateKeywords(self, id, keywords): # type: (str, [str]) -> object
url = "{url_base}/resource/{id}/scimeta/elements/".format(url_base=self.url_base, id=id)
r = self.request('PUT', url, json={'subjects': keywords})
# Also tried:
# r = self.request('PUT', url, json={'keywords': keywords})
# r = self.request('PUT', url, json={'subjects': [{"value": "keyword 1"}, {"value": "keyword 2"}]})
if r.status_code != 202:
raise HydroShareHTTPException((url, 'PUT', r.status_code, keywords))
return r.json()
```
All three attempts to update the keywords (in the code snippet above) return a 202 response, but the keywords/subjects remain unchanged on the resource.
Can someone provide an example that works? | priority | can t update keywords subjects through rest api i m trying to find a way to update resource keywords through the rest api made me think it s possible but nothing i ve tried has worked this code snippet shows a bit of what i ve tried python def updatekeywords self id keywords type str object url url base resource id scimeta elements format url base self url base id id r self request put url json subjects keywords also tried r self request put url json keywords keywords r self request put url json subjects if r status code raise hydrosharehttpexception url put r status code keywords return r json all three attempts to update the keywords in the code snippet above return a response but the keywords subjects remain unchanged on the resource can someone provide an example that works | 1 |
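Since a 202 only means the request was *accepted*, one robust pattern for the question above is to read the metadata back after the PUT instead of trusting the status code. The exact payload HydroShare expects is not confirmed here, so the sketch below uses an in-memory stand-in for the endpoint; the `{"value": ...}` subject shape is one of the guesses taken from the issue itself:

```python
class FakeMetadataClient:
    """In-memory stand-in for the REST endpoint in the question: it
    accepts any PUT (returning 202, as reported) but only *applies*
    payloads whose 'subjects' key holds a list of {'value': ...} dicts."""

    def __init__(self):
        self.subjects = []

    def put_scimeta(self, payload):
        subjects = payload.get("subjects")
        if isinstance(subjects, list) and all(
            isinstance(s, dict) and "value" in s for s in subjects
        ):
            self.subjects = [s["value"] for s in subjects]
        return 202                    # accepted either way, like the real server

    def get_keywords(self):
        return list(self.subjects)

def update_keywords_checked(client, keywords):
    """PUT the keywords, then read them back instead of trusting the 202."""
    status = client.put_scimeta({"subjects": [{"value": k} for k in keywords]})
    assert status == 202
    applied = client.get_keywords()
    if sorted(applied) != sorted(keywords):
        raise RuntimeError("server accepted the request but did not apply it")
    return applied
```

The read-back step would have surfaced the silent no-op described in the issue immediately.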
682,881 | 23,360,788,738 | IssuesEvent | 2022-08-10 11:30:56 | Redocly/redoc | https://api.github.com/repos/Redocly/redoc | closed | Sidebar does not stick on 2.0.0-rc.74 | Type: Bug investigation Priority: Medium | **Describe the bug**
The Redoc sidebar appears only on the first page: if I scroll down below the first page the sidebar does not stick, and I have to scroll back all the way up to view it
**Expected behavior**
Sidebar should stick to all the pages.
**Minimal reproducible OpenAPI snippet(if possible)**
loadRedocDocumentation() {
const elem = this.element.nativeElement.querySelector('.redoc-container');
const options = {
theme: { colors: { primary: { main: '#0C7696' } } }
};
this.apiDocsService.getApiDocs().subscribe((res) => Redoc.init(res, options, elem));
}
**Screenshots**


**Additional context**
Add any other context about the problem here.
| 1.0 | Sidebar does not stick on 2.0.0-rc.74 - **Describe the bug**
The Redoc sidebar appears only on the first page: if I scroll down below the first page the sidebar does not stick, and I have to scroll back all the way up to view it
**Expected behavior**
Sidebar should stick to all the pages.
**Minimal reproducible OpenAPI snippet(if possible)**
loadRedocDocumentation() {
const elem = this.element.nativeElement.querySelector('.redoc-container');
const options = {
theme: { colors: { primary: { main: '#0C7696' } } }
};
this.apiDocsService.getApiDocs().subscribe((res) => Redoc.init(res, options, elem));
}
**Screenshots**


**Additional context**
Add any other context about the problem here.
| priority | sidebar does not stick on rc describe the bug redoc sidebar appears only on the first page if i scroll down below first page my sidebar does not stick and i have to scroll back all the way up to view sidebar expected behavior sidebar should stick to all the pages minimal reproducible openapi snippet if possible loadredocdocumentation const elem this element nativeelement queryselector redoc container const options theme colors primary main this apidocsservice getapidocs subscribe res redoc init res options elem screenshots additional context add any other context about the problem here | 1 |
558,743 | 16,541,611,796 | IssuesEvent | 2021-05-27 17:32:48 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.3.5 release] Escape must close backpack | Category: Gameplay Priority: Medium Squad: Otter | As a player, the expected behavior is that escape should close an open UI. This works for almost all UI except, the backpack. This is because some people prefer to play with the backpack UI open at all times.
We must go with the behavior that is most intuitive to the majority of players. We will create a follow up issue to allow pinning a UI so it is unimpacted by pressing escape. | 1.0 | [0.9.3.5 release] Escape must close backpack - As a player, the expected behavior is that escape should close an open UI. This works for almost all UI except, the backpack. This is because some people prefer to play with the backpack UI open at all times.
We must go with the behavior that is most intuitive to the majority of players. We will create a follow up issue to allow pinning a UI so it is unimpacted by pressing escape. | priority | escape must close backpack as a player the expected behavior is that escape should close an open ui this works for almost all ui except the backpack this is because some people prefer to play with the backpack ui open at all times we must go with the behavior that is most intuitive to the majority of players we will create a follow up issue to allow pinning a ui so it is unimpacted by pressing escape | 1 |
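The behaviour decided above — escape closes an open UI, with a pinning option planned as a follow-up — can be sketched as a small panel stack. The names are illustrative, not Eco's actual API:

```python
def on_escape(open_panels, pinned):
    """Close the topmost open panel unless it is pinned; pinned panels
    (the planned follow-up, e.g. a pinned backpack) are skipped.
    Returns the panel that was closed, or None if nothing closable."""
    for panel in reversed(open_panels):
        if panel not in pinned:
            open_panels.remove(panel)
            return panel
    return None

panels = ["backpack", "crafting"]
closed = on_escape(panels, pinned=set())   # closes the topmost panel
```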
140,021 | 5,396,131,615 | IssuesEvent | 2017-02-27 10:44:39 | HPI-SWA-Lab/BP2016H1 | https://api.github.com/repos/HPI-SWA-Lab/BP2016H1 | opened | Extended profile information | priority medium | As a user I want fill in more profile information in order to be more recognizable and give background information about myself.
Required fields:
- username
- mail
- picture
- real name
- contact info
- description/bio
- knowledge in languages/skripts | 1.0 | Extended profile information - As a user I want fill in more profile information in order to be more recognizable and give background information about myself.
Required fields:
- username
- mail
- picture
- real name
- contact info
- description/bio
- knowledge in languages/skripts | priority | extended profile information as a user i want fill in more profile information in order to be more recognizable and give background information about myself required fields username mail picture real name contact info description bio knowledge in languages skripts | 1 |
146,879 | 5,629,964,543 | IssuesEvent | 2017-04-05 10:52:39 | Supadog/DB_iti | https://api.github.com/repos/Supadog/DB_iti | opened | Displaying only active professors when creating a user | Medium priority | Do not display inactive professors when creating a new professor/head professor user. | 1.0 | Displaying only active professors when creating a user - Do not display inactive professors when creating a new professor/head professor user. | priority | displaying only active professors when creating a user do not display inactive professors when creating a new professor head professor user | 1 |
261,745 | 8,245,604,814 | IssuesEvent | 2018-09-11 10:10:55 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Pin interrupt not handled when two pin ints fires in quick succession | area: GPIO bug nRF priority: medium | I'm seeing an issue where one pin interrupt is not handled when there is a second pin interrupt firing within 12µs after the first. I have attached a logic_analyzer file that shows the two pin interrupts and the handler of the first pin interrupt.
[Interrupt_crash.zip](https://github.com/zephyrproject-rtos/zephyr/files/2105881/Interrupt_crash.zip)
Explanation of the logic signals: The pin interrupts are active LOW. I've added a pin toggling to the handler that sets the pin LOW when the handler is called (will send a k_alert_send), the pin toggles back to HIGH when k_alert_recv for the same signal is called. | 1.0 | Pin interrupt not handled when two pin ints fires in quick succession - I'm seeing an issue where one pin interrupt is not handled when there is a second pin interrupt firing within 12µs after the first. I have attached a logic_analyzer file that shows the two pin interrupts and the handler of the first pin interrupt.
[Interrupt_crash.zip](https://github.com/zephyrproject-rtos/zephyr/files/2105881/Interrupt_crash.zip)
Explanation of the logic signals: The pin interrupts are active LOW. I've added a pin toggling to the handler that sets the pin LOW when the handler is called (will send a k_alert_send), the pin toggles back to HIGH when k_alert_recv for the same signal is called. | priority | pin interrupt not handled when two pin ints fires in quick succession i m seeing an issue where one pin interrupt is not handled when there is a second pin interrupt firing within after the first i have attached a logic analyzer file that shows the two pin interrupts and the handler of the first pin interrupt explanation of the logic signals the pin interrupts are active low i ve added a pin toggling to the handler that sets the pin low when the handler is called will send a k alert send the pin toggles back to high when k alert recv for the same signal is called | 1 |
699,518 | 24,019,527,172 | IssuesEvent | 2022-09-15 06:14:43 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] flat-remix-git | request:new-pkg priority:medium | ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/flat-remix
https://aur.archlinux.org/packages/flat-remix-git
### Utility this package has for you
Flat Remix is an icon theme inspired by material design. It is mostly flat using a colorful palette with some shadows, highlights, and gradients for some depth.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
YES!
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | 1.0 | [Request] flat-remix-git - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/flat-remix
https://aur.archlinux.org/packages/flat-remix-git
### Utility this package has for you
Flat Remix is an icon theme inspired by material design. It is mostly flat using a colorful palette with some shadows, highlights, and gradients for some depth.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
YES!
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | priority | flat remix git link to the package s in the aur utility this package has for you flat remix is an icon theme inspired by material design it is mostly flat using a colorful palette with some shadows highlights and gradients for some depth do you consider the package s to be useful for every chaotic aur user yes do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response | 1 |
784,967 | 27,591,274,638 | IssuesEvent | 2023-03-09 00:42:01 | RoboJackets/urc-software | https://api.github.com/repos/RoboJackets/urc-software | closed | Automated install script | level ➤ easy area ➤ misc priority ➤ medium | ## Description
Bash/zsh script to automate the [Ubuntu/Linux install instructions](https://github.com/RoboJackets/urc-software/blob/master/documents/installation/ubuntu_installation.md)
## Requirements
- .sh script
| 1.0 | Automated install script - ## Description
Bash/zsh script to automate the [Ubuntu/Linux install instructions](https://github.com/RoboJackets/urc-software/blob/master/documents/installation/ubuntu_installation.md)
## Requirements
- .sh script
| priority | automated install script description bash zsh script to automate the requirements sh script | 1 |
645,141 | 20,996,078,470 | IssuesEvent | 2022-03-29 13:36:47 | robotframework/SSHLibrary | https://api.github.com/repos/robotframework/SSHLibrary | closed | Library raises Authentication failure, but host accepted it | bug priority: medium rc 1 | Hello,
I have experienced issues with the latest version 3.6.0 that I do not have with version 3.5.1
I manage to open a connection with "open_connection"
`open_connection(ip, port=port, timeout=timeout, prompt="#")`
but "login" systematically fails
`self.login("root", "")`
as follows
`Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/pythonclient.py", line 123, in _login
transport.auth_none(username)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/paramiko/transport.py", line 1446, in auth_none
return self.auth_handler.wait_for_response(my_event)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/paramiko/auth_handler.py", line 240, in wait_for_response
raise AuthenticationException("Authentication timeout.")
paramiko.ssh_exception.AuthenticationException: Authentication timeout.
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/abstractclient.py", line 202, in login
self._login(username, password, allow_agent, look_for_keys, proxy_cmd, read_config_host, jumphost_connection)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/pythonclient.py", line 141, in _login
raise SSHClientException
SSHLibrary.abstractclient.SSHClientException
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 1053, in _login
login_output = login_method(username, *args)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/abstractclient.py", line 206, in login
% self._decode(username))
SSHLibrary.abstractclient.SSHClientException: Authentication failed for user 'root'.
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "qemu_investigation.py", line 7, in <module>
foo.connect_board()
File "/media/workdir/iot-bridge-projectconf/verification/test_lib/fr_iotb_environment/Ssh.py", line 56, in connect_board
raise e
File "/media/workdir/iot-bridge-projectconf/verification/test_lib/fr_iotb_environment/Ssh.py", line 49, in connect_board
self.login(self.sut.user, self.sut.password)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 982, in login
is_truthy(look_for_keys), delay, proxy_cmd, is_truthy(read_config_host), jumphost_connection)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 1059, in _login
raise RuntimeError(e)
RuntimeError: Authentication failed for user 'root'.
`
On the host side, login is seen with a `journalctl -f`
` sshd[2307]: Accepted none for root from 192.168.1.26 port 52926 ssh2`
As I mentionned, with version 3.5.1 eveything works fine (I switched version several times with `pip3 install robotframework-sshlibrary==3.5.1` / `pip3 install robotframework-sshlibrary==3.6.0`)
The host I am trying to connect to is an embedded system with linux, accessible with "root" and no password. I am not sure I can give you much details about the host, because this is work-related. Anyway, I figured this might reveal a problem in the 3.6.0 version of SSHLibrary.
Best regards | 1.0 | Library raises Authentication failure, but host accepted it - Hello,
I have experienced issues with the latest version 3.6.0 that I do not have with version 3.5.1
I manage to open a connection with "open_connection"
`open_connection(ip, port=port, timeout=timeout, prompt="#")`
but "login" systematically fails
`self.login("root", "")`
as follows
`Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/pythonclient.py", line 123, in _login
transport.auth_none(username)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/paramiko/transport.py", line 1446, in auth_none
return self.auth_handler.wait_for_response(my_event)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/paramiko/auth_handler.py", line 240, in wait_for_response
raise AuthenticationException("Authentication timeout.")
paramiko.ssh_exception.AuthenticationException: Authentication timeout.
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/abstractclient.py", line 202, in login
self._login(username, password, allow_agent, look_for_keys, proxy_cmd, read_config_host, jumphost_connection)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/pythonclient.py", line 141, in _login
raise SSHClientException
SSHLibrary.abstractclient.SSHClientException
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 1053, in _login
login_output = login_method(username, *args)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/abstractclient.py", line 206, in login
% self._decode(username))
SSHLibrary.abstractclient.SSHClientException: Authentication failed for user 'root'.
`
`
During handling of the above exception, another exception occurred:
`
`
Traceback (most recent call last):
File "qemu_investigation.py", line 7, in <module>
foo.connect_board()
File "/media/workdir/iot-bridge-projectconf/verification/test_lib/fr_iotb_environment/Ssh.py", line 56, in connect_board
raise e
File "/media/workdir/iot-bridge-projectconf/verification/test_lib/fr_iotb_environment/Ssh.py", line 49, in connect_board
self.login(self.sut.user, self.sut.password)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 982, in login
is_truthy(look_for_keys), delay, proxy_cmd, is_truthy(read_config_host), jumphost_connection)
File "/home/galaxy/.pyenv/versions/py369-ssh/lib/python3.6/site-packages/SSHLibrary/library.py", line 1059, in _login
raise RuntimeError(e)
RuntimeError: Authentication failed for user 'root'.
`
On the host side, login is seen with a `journalctl -f`
` sshd[2307]: Accepted none for root from 192.168.1.26 port 52926 ssh2`
As I mentionned, with version 3.5.1 eveything works fine (I switched version several times with `pip3 install robotframework-sshlibrary==3.5.1` / `pip3 install robotframework-sshlibrary==3.6.0`)
The host I am trying to connect to is an embedded system with linux, accessible with "root" and no password. I am not sure I can give you much details about the host, because this is work-related. Anyway, I figured this might reveal a problem in the 3.6.0 version of SSHLibrary.
Best regards | priority | library raises authentication failure but host accepted it hello i have experienced issues with the latest version that i do not have with version i manage to open a connection with open connection open connection ip port port timeout timeout prompt but login systematically fails self login root as follows traceback most recent call last file home galaxy pyenv versions ssh lib site packages sshlibrary pythonclient py line in login transport auth none username file home galaxy pyenv versions ssh lib site packages paramiko transport py line in auth none return self auth handler wait for response my event file home galaxy pyenv versions ssh lib site packages paramiko auth handler py line in wait for response raise authenticationexception authentication timeout paramiko ssh exception authenticationexception authentication timeout during handling of the above exception another exception occurred traceback most recent call last file home galaxy pyenv versions ssh lib site packages sshlibrary abstractclient py line in login self login username password allow agent look for keys proxy cmd read config host jumphost connection file home galaxy pyenv versions ssh lib site packages sshlibrary pythonclient py line in login raise sshclientexception sshlibrary abstractclient sshclientexception during handling of the above exception another exception occurred traceback most recent call last file home galaxy pyenv versions ssh lib site packages sshlibrary library py line in login login output login method username args file home galaxy pyenv versions ssh lib site packages sshlibrary abstractclient py line in login self decode username sshlibrary abstractclient sshclientexception authentication failed for user root during handling of the above exception another exception occurred traceback most recent call last file qemu investigation py line in foo connect board file media workdir iot bridge projectconf verification test lib fr iotb environment ssh py line 
in connect board raise e file media workdir iot bridge projectconf verification test lib fr iotb environment ssh py line in connect board self login self sut user self sut password file home galaxy pyenv versions ssh lib site packages sshlibrary library py line in login is truthy look for keys delay proxy cmd is truthy read config host jumphost connection file home galaxy pyenv versions ssh lib site packages sshlibrary library py line in login raise runtimeerror e runtimeerror authentication failed for user root on the host side login is seen with a journalctl f sshd accepted none for root from port as i mentionned with version eveything works fine i switched version several times with install robotframework sshlibrary install robotframework sshlibrary the host i am trying to connect to is an embedded system with linux accessible with root and no password i am not sure i can give you much details about the host because this is work related anyway i figured this might reveal a problem in the version of sshlibrary best regards | 1 |
212,078 | 7,228,167,232 | IssuesEvent | 2018-02-11 05:52:27 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | (Stupid) questions about coverage reports | bug priority: medium | Coverage reporting via https://codecov.io/gh/zephyrproject-rtos/zephyr was recently introduced, and coverage reports usually lead to a bunch of question. It's proposed to have this ticket to dump them.
| 1.0 | (Stupid) questions about coverage reports - Coverage reporting via https://codecov.io/gh/zephyrproject-rtos/zephyr was recently introduced, and coverage reports usually lead to a bunch of question. It's proposed to have this ticket to dump them.
| priority | stupid questions about coverage reports coverage reporting via was recently introduced and coverage reports usually lead to a bunch of question it s proposed to have this ticket to dump them | 1 |
140,714 | 5,415,024,965 | IssuesEvent | 2017-03-01 20:32:17 | PowerlineApp/powerline-mobile | https://api.github.com/repos/PowerlineApp/powerline-mobile | opened | Block Users - Content & Notifications & Comments | enhancement P2 - Medium Priority | User A needs to have the ability to block another user B. This will prevent User B's content from displaying in the User A newsfeed. It will also prevent User B from generating any notifications for User A (e.g. "commented on your post" @mentioned you,). It will also hide User A's content from User B's feed.
User A needs the ability to view a list of all Users that he has blocked. He needs the ability to unblock User B from that list.
User B should not know that User A blocked him. | 1.0 | Block Users - Content & Notifications & Comments - User A needs to have the ability to block another user B. This will prevent User B's content from displaying in the User A newsfeed. It will also prevent User B from generating any notifications for User A (e.g. "commented on your post" @mentioned you,). It will also hide User A's content from User B's feed.
User A needs the ability to view a list of all Users that he has blocked. He needs the ability to unblock User B from that list.
User B should not know that User A blocked him. | priority | block users content notifications comments user a needs to have the ability to block another user b this will prevent user b s content from displaying in the user a newsfeed it will also prevent user b from generating any notifications for user a e g commented on your post mentioned you it will also hide user a s content from user b s feed user a needs the ability to view a list of all users that he has blocked he needs the ability to unblock user b from that list user b should not know that user a blocked him | 1 |
673,351 | 22,959,299,059 | IssuesEvent | 2022-07-19 14:10:52 | MAIF/react-forms | https://api.github.com/repos/MAIF/react-forms | closed | It would be nice to have a function preporty for the collapse view of nested form | enhancement priority medium | - [ ] user can give visible fields on collapse
- [ ] use can give a function to build himself the collapsed view | 1.0 | It would be nice to have a function preporty for the collapse view of nested form - - [ ] user can give visible fields on collapse
- [ ] use can give a function to build himself the collapsed view | priority | it would be nice to have a function preporty for the collapse view of nested form user can give visible fields on collapse use can give a function to build himself the collapsed view | 1 |
379,941 | 11,251,654,324 | IssuesEvent | 2020-01-11 01:45:28 | Azure/ARO-RP | https://api.github.com/repos/Azure/ARO-RP | closed | Consider allowing the user to specify the cluster resource group | medium-priority | Currently we hard code the cluster resource group to the name of the cluster resource. AKS and ARO v3 use random UUID resource groups. Can we update our API to allow the user to specify the cluster resource group name (we will then create it). | 1.0 | Consider allowing the user to specify the cluster resource group - Currently we hard code the cluster resource group to the name of the cluster resource. AKS and ARO v3 use random UUID resource groups. Can we update our API to allow the user to specify the cluster resource group name (we will then create it). | priority | consider allowing the user to specify the cluster resource group currently we hard code the cluster resource group to the name of the cluster resource aks and aro use random uuid resource groups can we update our api to allow the user to specify the cluster resource group name we will then create it | 1 |
85,487 | 3,690,968,296 | IssuesEvent | 2016-02-25 22:01:23 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | Modify StatsCmd to optionally write stats to a file in json format | Category: Core Priority: Medium Status: Defined Type: Feature | Much like https://github.com/ngageoint/hootenanny/issues/288, this change will allow the UI to trigger stats generation on single datasets and attach them to the map metadata in the services db for display in the UI. | 1.0 | Modify StatsCmd to optionally write stats to a file in json format - Much like https://github.com/ngageoint/hootenanny/issues/288, this change will allow the UI to trigger stats generation on single datasets and attach them to the map metadata in the services db for display in the UI. | priority | modify statscmd to optionally write stats to a file in json format much like this change will allow the ui to trigger stats generation on single datasets and attach them to the map metadata in the services db for display in the ui | 1 |
531,295 | 15,444,343,895 | IssuesEvent | 2021-03-08 10:17:02 | AY2021S2-CS2103T-T12-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-T12-4/tp | closed | Bug: Exit command does not exit properly | priority.Medium severity.Low | - @JulietTeoh I have the same issue as well, just adding it here for the record. | 1.0 | Bug: Exit command does not exit properly - - @JulietTeoh I have the same issue as well, just adding it here for the record. | priority | bug exit command does not exit properly julietteoh i have the same issue as well just adding it here for the record | 1 |
481,213 | 13,882,059,181 | IssuesEvent | 2020-10-18 04:33:05 | AY2021S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-W16-3/tp | opened | Displayed list no longer updates automatically | priority.medium :2nd_place_medal: type.bug :bug: | The displayed list has to be reloaded after a transaction is added/edited/deleted in order to view the latest state of the list.
First noticed in #123. | 1.0 | Displayed list no longer updates automatically - The displayed list has to be reloaded after a transaction is added/edited/deleted in order to view the latest state of the list.
First noticed in #123. | priority | displayed list no longer updates automatically the displayed list has to be reloaded after a transaction is added edited deleted in order to view the latest state of the list first noticed in | 1 |
40,422 | 2,868,918,739 | IssuesEvent | 2015-06-05 21:57:34 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Pub needs rigorous, testable specification | enhancement NotPlanned Priority-Medium | <a href="https://github.com/peter-ahe-google"><img src="https://avatars.githubusercontent.com/u/5689005?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [peter-ahe-google](https://github.com/peter-ahe-google)**
_Originally opened as dart-lang/sdk#3705_
----
The pub package manager currently use YAML to specify package dependencies. This should be part of the language specification. | 1.0 | Pub needs rigorous, testable specification - <a href="https://github.com/peter-ahe-google"><img src="https://avatars.githubusercontent.com/u/5689005?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [peter-ahe-google](https://github.com/peter-ahe-google)**
_Originally opened as dart-lang/sdk#3705_
----
The pub package manager currently use YAML to specify package dependencies. This should be part of the language specification. | priority | pub needs rigorous testable specification issue by originally opened as dart lang sdk the pub package manager currently use yaml to specify package dependencies this should be part of the language specification | 1 |
412,147 | 12,035,824,353 | IssuesEvent | 2020-04-13 18:36:48 | bounswe/bounswe2020group7 | https://api.github.com/repos/bounswe/bounswe2020group7 | closed | Design Documents Feedback Update(Class Diagram) | Priority: Medium Status: Done Type: Improvement Type: Task | Update the class diagram according to the feedback given [here](https://github.com/bounswe/bounswe2020group7/wiki/Design-Documents-Feedback). Then, write all updates to the feedback document which can be found [here](https://github.com/bounswe/bounswe2020group7/wiki/Design-Documents-Feedback)
Deadline: @@ | 1.0 | Design Documents Feedback Update(Class Diagram) - Update the class diagram according to the feedback given [here](https://github.com/bounswe/bounswe2020group7/wiki/Design-Documents-Feedback). Then, write all updates to the feedback document which can be found [here](https://github.com/bounswe/bounswe2020group7/wiki/Design-Documents-Feedback)
Deadline: @@ | priority | design documents feedback update class diagram update the class diagram according to the feedback given then write all updates to the feedback document which can be found deadline | 1 |
53,813 | 3,051,101,856 | IssuesEvent | 2015-08-12 05:39:31 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | [MASTER] Portable Air Pumps need APC channel to operate | bug priority: medium | EQUIP channel, apparently. Cut it on APC and pump's UI won't be openable. It's also likely occuring with portable scrubbers. | 1.0 | [MASTER] Portable Air Pumps need APC channel to operate - EQUIP channel, apparently. Cut it on APC and pump's UI won't be openable. It's also likely occuring with portable scrubbers. | priority | portable air pumps need apc channel to operate equip channel apparently cut it on apc and pump s ui won t be openable it s also likely occuring with portable scrubbers | 1 |
25,806 | 2,683,995,351 | IssuesEvent | 2015-03-28 15:10:07 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | lswitch language switching and the "Monitor console lang" setting | 2–5 stars bug imported Priority-Medium | _From [avsergie...@gmail.com](https://code.google.com/u/115651336206363266876/) on February 13, 2011 02:34:40_
There is a good language switcher, lswitch. It works well in windows and FAR, but in conemu switching from English to Russian works normally, while switching from Russian to English requires pressing Caps Lock twice. Is this some incompatibility between lswitch and conemu?
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=368_ | 1.0 | lswitch language switching and the "Monitor console lang" setting - _From [avsergie...@gmail.com](https://code.google.com/u/115651336206363266876/) on February 13, 2011 02:34:40_
There is a good language switcher, lswitch. It works well in windows and FAR, but in conemu switching from English to Russian works normally, while switching from Russian to English requires pressing Caps Lock twice. Is this some incompatibility between lswitch and conemu?
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=368_ | priority | lswitch language switching and monitor console lang setting from on february there is a good language switcher lswitch it works well in windows and far but in conemu switching from english to russian works normally while switching from russian to english requires pressing caps lock twice is this some incompatibility between lswitch and conemu original issue | 1 |
335,359 | 10,152,462,564 | IssuesEvent | 2019-08-05 23:51:33 | medic/medic | https://api.github.com/repos/medic/medic | closed | Access logging outputs the load balancer IP in production | Priority: 2 - Medium Type: Bug | **To Reproduce**
Steps to reproduce the behavior:
1. Run this against any production instance and ignore the failure:
`curl -X POST https://<user>:<pass>@<instance>/api/v1/sms/africastalking/delivery-reports`
2. Look at the access logs for that instance
3. Find your request(s)
4. See that the IP doesn't match the IP address of the client (your machine)
**Expected behavior**
The logged IP address should match your client address.
**Logs**
```
Jul 28 11:15:58 dev-test-cdc-mohke-dsru dev-test-cdc-mohke-dsru-medic-api-logs: (dev-test-cdc-mohke-dsru-6bb57f6458-gdqrw) | [2019-07-28 08:15:57] REQ 492b5625-c7a1-4eba-923d-10cab3cc998b ::ffff:100.120.0.0 - POST /api/v1/sms/africastalking/delivery-reports HTTP/1.1
Jul 28 11:15:58 dev-test-cdc-mohke-dsru dev-test-cdc-mohke-dsru-medic-api-logs: (dev-test-cdc-mohke-dsru-6bb57f6458-gdqrw) | [2019-07-28 08:15:57] RES 492b5625-c7a1-4eba-923d-10cab3cc998b ::ffff:100.120.0.0 - POST /api/v1/sms/africastalking/delivery-reports HTTP/1.1 403 - 2.310 ms
```
The important bit is the `100.120.0.0` on both of these lines.
**Environment**
- Instance: any production instance
- Browser: N/A
- Client platform: N/A
- App: api
- Version: master
| 1.0 | Access logging outputs the load balancer IP in production - **To Reproduce**
Steps to reproduce the behavior:
1. Run this against any production instance and ignore the failure:
`curl -X POST https://<user>:<pass>@<instance>/api/v1/sms/africastalking/delivery-reports`
2. Look at the access logs for that instance
3. Find your request(s)
4. See that the IP doesn't match the IP address of the client (your machine)
**Expected behavior**
The logged IP address should match your client address.
**Logs**
```
Jul 28 11:15:58 dev-test-cdc-mohke-dsru dev-test-cdc-mohke-dsru-medic-api-logs: (dev-test-cdc-mohke-dsru-6bb57f6458-gdqrw) | [2019-07-28 08:15:57] REQ 492b5625-c7a1-4eba-923d-10cab3cc998b ::ffff:100.120.0.0 - POST /api/v1/sms/africastalking/delivery-reports HTTP/1.1
Jul 28 11:15:58 dev-test-cdc-mohke-dsru dev-test-cdc-mohke-dsru-medic-api-logs: (dev-test-cdc-mohke-dsru-6bb57f6458-gdqrw) | [2019-07-28 08:15:57] RES 492b5625-c7a1-4eba-923d-10cab3cc998b ::ffff:100.120.0.0 - POST /api/v1/sms/africastalking/delivery-reports HTTP/1.1 403 - 2.310 ms
```
The important bit is the `100.120.0.0` on both of these lines.
**Environment**
- Instance: any production instance
- Browser: N/A
- Client platform: N/A
- App: api
- Version: master
| priority | access logging outputs the load balancer ip in production to reproduce steps to reproduce the behavior run this against any production instance and ignore the failure curl x post look at the access logs for that instance find your request s see that the ip doesn t match the ip address of the client your machine expected behavior the logged ip address should match your client address logs jul dev test cdc mohke dsru dev test cdc mohke dsru medic api logs dev test cdc mohke dsru gdqrw req ffff post api sms africastalking delivery reports http jul dev test cdc mohke dsru dev test cdc mohke dsru medic api logs dev test cdc mohke dsru gdqrw res ffff post api sms africastalking delivery reports http ms the important bit is the on both of these lines environment instance any production instance browser n a client platform n a app api version master | 1 |
292,333 | 8,956,455,825 | IssuesEvent | 2019-01-26 17:40:21 | Stivius/XiboLinuxStack | https://api.github.com/repos/Stivius/XiboLinuxStack | closed | MediaInventory request | medium priority task | File cache manager should notify CMS to update the status of its cached files | 1.0 | MediaInventory request - File cache manager should notify CMS to update the status of its cached files | priority | mediainventory request file cache manager should notify cms to update the status of its cached files | 1 |
734,898 | 25,369,457,340 | IssuesEvent | 2022-11-21 09:29:17 | canonical/maas-ui | https://api.github.com/repos/canonical/maas-ui | closed | Sentry blocked by CORS | Priority: Medium Bug 🐛 | **Describe the bug**
Sentry requests are being blocked by CORS policy.
**Steps to reproduce**
1. Go to http://polong.internal:5240/MAAS/r/machines
2. Open Developer tools
3. The error below is being thrown in the console:
`Access to fetch at 'https://sentry.is.canonical.com/api/22/envelope/?sentry_key=901f18f8af164718b5cb34c869e6885d&sentry_version=7&sentry_client=sentry.javascript.browser%2F7.8.0' from origin 'http://polong.internal:5240' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.`
**maas-ui version**
3.3.0
**Additional context**
We might need to use a tunnel to workaround this: https://docs.sentry.io/platforms/javascript/troubleshooting/#using-the-tunnel-option
| 1.0 | Sentry blocked by CORS - **Describe the bug**
Sentry requests are being blocked by CORS policy.
**Steps to reproduce**
1. Go to http://polong.internal:5240/MAAS/r/machines
2. Open Developer tools
3. The error below is being thrown in the console:
`Access to fetch at 'https://sentry.is.canonical.com/api/22/envelope/?sentry_key=901f18f8af164718b5cb34c869e6885d&sentry_version=7&sentry_client=sentry.javascript.browser%2F7.8.0' from origin 'http://polong.internal:5240' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.`
**maas-ui version**
3.3.0
**Additional context**
We might need to use a tunnel to workaround this: https://docs.sentry.io/platforms/javascript/troubleshooting/#using-the-tunnel-option
| priority | sentry blocked by cors describe the bug sentry requests are being blocked by cors policy steps to reproduce go to open developer tools the error below is being thrown in the console access to fetch at from origin has been blocked by cors policy no access control allow origin header is present on the requested resource if an opaque response serves your needs set the request s mode to no cors to fetch the resource with cors disabled maas ui version additional context we might need to use a tunnel to workaround this | 1 |
638,797 | 20,738,558,758 | IssuesEvent | 2022-03-14 15:40:01 | google/flax | https://api.github.com/repos/google/flax | closed | Clarify expectations for `variables` dict in apply() | Priority: P2 - medium | Filing this to document some points for future reference, which came up while working on a fix for #1768.
#1768 is a case where a user unintentionally passed `variables={'params': {'params': ...}}`. This can be solved by adding a check inside apply().
There's a broader case to consider, of whether the library should enforce that the `variables` dict always contains a 'params' key. This would catch cases where the user unintentionally passes `variables={'kernel': ...}`, though I'd guess that that is relatively rarer.
Benefits of enforcing:
* Easier to detect invalid input early on
Costs of enforcing:
* It may not always be correct to assume that `variables` contains 'params', for real usecases.
* Would require a bunch of updates across the Flax tests: there are multiple instances of a variables dict not containing 'params' where it was intended to be valid, e.g. in linen_transforms_test
```
(Pdb) variables
FrozenDict({
test: {
inner: {
baz: DeviceArray([1.], dtype=float32),
},
},
})
```
That along with the docstring in variables.py would need to be updated, if the library enforces 'params'. | 1.0 | Clarify expectations for `variables` dict in apply() - Filing this to document some points for future reference, which came up while working on a fix for #1768.
#1768 is a case where a user unintentionally passed `variables={'params': {'params': ...}}`. This can be solved by adding a check inside apply().
There's a broader case to consider, of whether the library should enforce that the `variables` dict always contains a 'params' key. This would catch cases where the user unintentionally passes `variables={'kernel': ...}`, though I'd guess that that is relatively rarer.
Benefits of enforcing:
* Easier to detect invalid input early on
Costs of enforcing:
* It may not always be correct to assume that `variables` contains 'params', for real usecases.
* Would require a bunch of updates across the Flax tests: there are multiple instances of a variables dict not containing 'params' where it was intended to be valid, e.g. in linen_transforms_test
```
(Pdb) variables
FrozenDict({
test: {
inner: {
baz: DeviceArray([1.], dtype=float32),
},
},
})
```
That along with the docstring in variables.py would need to be updated, if the library enforces 'params'. | priority | clarify expectations for variables dict in apply filing this to document some points for future reference which came up while working on a fix for is a case where a user unintentionally passed variables params params this can be solved by adding a check inside apply there s a broader case to consider of whether the library should enforce that the variables dict always contains a params key this would catch cases where the user unintentionally passes variables kernel though i d guess that that is relatively rarer benefits of enforcing easier to detect invalid input early on costs of enforcing it may not always be correct to assume that variables contains params for real usecases would require a bunch of updates across the flax tests there are multiple instances of a variables dict not containing params where it was intended to be valid e g in linen transforms test pdb variables frozendict test inner baz devicearray dtype that along with the docstring in variables py would need to be updated if the library enforces params | 1 |
150,232 | 5,741,203,012 | IssuesEvent | 2017-04-24 04:14:22 | intel-analytics/BigDL | https://api.github.com/repos/intel-analytics/BigDL | closed | refactor the optimizer initialization process in python api | medium priority python | 1. add *set_model()* to optimizer
Currently the model could only be set to optimizer during creation. That means we can't reuse an optimizer for different models. Creating optimizer may be time consuming sometimes.
1. add *prepare_input()* to optimizer
Currently optimizer takes a long time to create and initialize. The reason is loading input and caching takes time. We could warp this part of work into one function and name it something like prepare_input/load_input, and allow user to train different models on the same set of input data. and it won't be confusing why initialization takes so long. | 1.0 | refactor the optimizer initialization process in python api - 1. add *set_model()* to optimizer
Currently the model could only be set to optimizer during creation. That means we can't reuse an optimizer for different models. Creating optimizer may be time consuming sometimes.
1. add *prepare_input()* to optimizer
Currently optimizer takes a long time to create and initialize. The reason is loading input and caching takes time. We could warp this part of work into one function and name it something like prepare_input/load_input, and allow user to train different models on the same set of input data. and it won't be confusing why initialization takes so long. | priority | refactor the optimizer initialization process in python api add set model to optimizer currently the model could only be set to optimizer during creation that means we can t reuse an optimizer for different models creating optimizer may be time consuming sometimes add prepare input to optimizer currently optimizer takes a long time to create and initialize the reason is loading input and caching takes time we could warp this part of work into one function and name it something like prepare input load input and allow user to train different models on the same set of input data and it won t be confusing why initialization takes so long | 1 |
243,096 | 7,853,297,854 | IssuesEvent | 2018-06-20 16:57:54 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | opened | Remove unsupported content types form the filter dropdown on the Search page | area/frontend priority/medium status/new | - [ ] We currently only support Role and Ansible Playbook Bundle
- [ ] Ansible Playbook Bundle is too long to display correctly on the list | 1.0 | Remove unsupported content types form the filter dropdown on the Search page - - [ ] We currently only support Role and Ansible Playbook Bundle
- [ ] Ansible Playbook Bundle is too long to display correctly on the list | priority | remove unsupported content types form the filter dropdown on the search page we currently only support role and ansible playbook bundle ansible playbook bundle is too long to display correctly on the list | 1 |
751,800 | 26,258,687,496 | IssuesEvent | 2023-01-06 04:48:56 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Number records removed from sync | Priority: high Effort: medium Type: Enhancement | ## Is your feature request related to a problem? Please describe.
[A new KDD on mSupply Server](https://github.com/sussol/msupply/pull/11767/files). Pasted below in case one cannot access:
<details>
# KDD-0012: Sync v5 number sequences not to sync
- _Date_: 2022-11-15
- _Deciders_: @Chris-Petty @andreievg
- _Status_: Decided
- _Outcome_: Don't sync number records. Generate them on use, and clean up when stores move around.
## Context
mSupply uses tables `number` and `number_reuse` to manage serials numbers on invoices (oms shipments), requisitions and several other things. These number tables have historically not been included in sync which has lead to some headaches with serial numbers seemingly resetting as stores move from one site to another, or sites get devices replaced and initialised.
Recently in sync v5 it was decided to sync them to resolve this problem. This has been implemented in mSupply desktop (4D) but in testing and addressing number sequence sync in omSupply it has become apparent it will not work:
- When sites are upgraded to omSupply, they may not have been previously upgraded to the latest version of mSupply desktop that syncs number sequences. This results in omSupply not getting them and generating its own that start from `1`.
- Desktop tracks requisition sequences with `requisition_number_for_store_`.
- Mobile `requisition_serial_number_for_store_`.
- Omsupply has split the sequences into `request_requisition_for_store_` and `response_requisition_for_store_` for request and response requisitions respectively.
This causes a mess. If all these applications sync their number sequences, when a store is moved to another site or the site is upgraded from one app to another, the sequences don't get reused because the keys don't match. Omsupply has diverged (in a good way) from what desktop was doing, so it's not immediately clear how to translate this. We actually thought that mobile was doing separate sequences too but it appears in code that it doesn't! It has a second sequence for another field `requisition.requester_reference` that seems to just be redundant.
We need a solution that:
- Allows stores to move from one site to another using potentially a different app and continue allocate serial numbers correctly (including number reuse)
- Allows a site to initialised in omSupply that was previously used on an older version of desktop or mobile and continue allocate serial numbers correctly (including number reuse)
- Regarding number_reuse, after a discussion with @craigdrown we've agreed that it's not used much. In operation if a store moves from one site to another, a site changes app or the number reuse pref is turned on, then we **do not** need to backfill the `number_reuse` table. Just continue from the current highest serial number.
### Small caveat on number reuse
omSupply uses 2 sequences for the 2 types of requisition unlike the old apps. While we don't currently support number reuse for requisitions, it could lead to some odd behaviours if we did:
- OmSupply store moves to desktop or mobile - We accept that it'll appear that there are duplicate sequences across the 2 requisition types.
- Desktop/mobile store moves omSupply - We accept that both requisition types will appear to have gaps in the sequences. This may wreck havoc with `number_reuse`, though! If the pref is on the intuition will be to reuse all the gaps which will seem like an odd behaviour.
## Options
### Option 1
The central server on initialisation sorts out making sure the correct sequences exist.
As described in the context above, we can rule this one out as it is flawed:
- Because each app has a different sequence key for requisition sequences, we have to make the central server client aware to make sure we're sending out `number` records with the correct key for the client app 😭
- Stores can be moved from site to site outside of initialisation. So solutions should address this too. The central server would also have to make sure it does it correctly when moving a store making sure to satisfy the above point.
### Option 2
The apps generate number sequences on initialisation and whenever a store is synced and set to active. Don't bother syncing number sequences.
- The remote sites would have to be **certain** that they have synced **all** all the records relating to the sequences before generating the number records, or face gaps and/or duplicates.
- The central server would have to make sure that for stores that are made active on itself, it generates the sequence for use as a remote site would.
_Pros:_
- Each remote app can maintain its own style of serial numbering
- At the moment the only tenable approach
_Cons:_
- Have to trust the remote apps to handle it correctly - this is fine, we already trust them to do serials numbering correctly regardless
- Have to implement it in every app. Not too hard
- Have to undo some existing work. Always the way 😉
### Option 3
When ever an app requests the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
When ever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
_Pros:_
- Super easy and effective
_Cons:_
- Less fun? Seems pretty solid really.
## Decision
Option 3 (for all apps).
Easiest approach, with least complexity that is quite robust. Code largely in one place. Not great for `number_reuse` though.
- Option 1 is not functional
- Option 2 works, but is complex and scattered.
Did consider option 2 for Desktop/4D and option 3 for omSupply, but after clarifying number reuse behaviours option 3 is adequate for mSupply desktop and way easier than option 2.
## Consequences
- mSupply desktop, open mSupply needs to implement option 3.
- Whenever an app requests the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
- Whenever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
- Remove any code for syncing the `number` and `number_reuse` tables.
- Migration for removing any `number` and `number_reuse` records for stores not active on the datafile/site.
</details>
To handle numbers sequences correctly:
- [ ] Whenever getting the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
- [ ] Whenever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
- [ ] Remove any code for syncing the `number` and `number_reuse` tables.
- [ ] If receiving `number` or `number_reuse` records in sync, just ignore them.
- [ ] Migration for removing any `number` and `number_reuse` records for stores not active on the datafile/site.
| 1.0 | Number records removed from sync - ## Is your feature request related to a problem? Please describe.
[A new KDD on mSupply Server](https://github.com/sussol/msupply/pull/11767/files). Pasted below in case one cannot access:
<details>
# KDD-0012: Sync v5 number sequences not to sync
- _Date_: 2022-11-15
- _Deciders_: @Chris-Petty @andreievg
- _Status_: Decided
- _Outcome_: Don't sync number records. Generate them on use, and clean up when stores move around.
## Context
mSupply uses tables `number` and `number_reuse` to manage serials numbers on invoices (oms shipments), requisitions and several other things. These number tables have historically not been included in sync which has lead to some headaches with serial numbers seemingly resetting as stores move from one site to another, or sites get devices replaced and initialised.
Recently in sync v5 it was decided to sync them to resolve this problem. This has been implemented in mSupply desktop (4D) but in testing and addressing number sequence sync in omSupply it has become apparent it will not work:
- When sites are upgraded to omSupply, they may not have been previously upgraded to the latest version of mSupply desktop that syncs number sequences. This results in omSupply not getting them and generating its own that start from `1`.
- Desktop tracks requisition sequences with `requisition_number_for_store_`.
- Mobile `requisition_serial_number_for_store_`.
- Omsupply has split the sequences into `request_requisition_for_store_` and `response_requisition_for_store_` for request and response requisitions respectively.
This causes a mess. If all these applications sync their number sequences, when a store is moved to another site or the site is upgraded from one app to another, the sequences don't get reused because the keys don't match. Omsupply has diverged (in a good way) from what desktop was doing, so it's not immediately clear how to translate this. We actually thought that mobile was doing separate sequences too but it appears in code that it doesn't! It has a second sequence for another field `requisition.requester_reference` that seems to just be redundant.
We need a solution that:
- Allows stores to move from one site to another using potentially a different app and continue allocate serial numbers correctly (including number reuse)
- Allows a site to initialised in omSupply that was previously used on an older version of desktop or mobile and continue allocate serial numbers correctly (including number reuse)
- Regarding number_reuse, after a discussion with @craigdrown we've agreed that it's not used much. In operation if a store moves from one site to another, a site changes app or the number reuse pref is turned on, then we **do not** need to backfill the `number_reuse` table. Just continue from the current highest serial number.
### Small caveat on number reuse
omSupply uses 2 sequences for the 2 types of requisition unlike the old apps. While we don't currently support number reuse for requisitions, it could lead to some odd behaviours if we did:
- OmSupply store moves to desktop or mobile - We accept that it'll appear that there are duplicate sequences across the 2 requisition types.
- Desktop/mobile store moves omSupply - We accept that both requisition types will appear to have gaps in the sequences. This may wreck havoc with `number_reuse`, though! If the pref is on the intuition will be to reuse all the gaps which will seem like an odd behaviour.
## Options
### Option 1
The central server on initialisation sorts out making sure the correct sequences exist.
As described in the context above, we can rule this one out as it is flawed:
- Because each app has a different sequence key for requisition sequences, we have to make the central server client aware to make sure we're sending out `number` records with the correct key for the client app 😭
- Stores can be moved from site to site outside of initialisation. So solutions should address this too. The central server would also have to make sure it does it correctly when moving a store making sure to satisfy the above point.
### Option 2
The apps generate number sequences on initialisation and whenever a store is synced and set to active. Don't bother syncing number sequences.
- The remote sites would have to be **certain** that they have synced **all** all the records relating to the sequences before generating the number records, or face gaps and/or duplicates.
- The central server would have to make sure that for stores that are made active on itself, it generates the sequence for use as a remote site would.
_Pros:_
- Each remote app can maintain its own style of serial numbering
- At the moment the only tenable approach
_Cons:_
- Have to trust the remote apps to handle it correctly - this is fine, we already trust them to do serials numbering correctly regardless
- Have to implement it in every app. Not too hard
- Have to undo some existing work. Always the way 😉
### Option 3
When ever an app requests the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
When ever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
_Pros:_
- Super easy and effective
_Cons:_
- Less fun? Seems pretty solid really.
## Decision
Option 3 (for all apps).
Easiest approach, with least complexity that is quite robust. Code largely in one place. Not great for `number_reuse` though.
- Option 1 is not functional
- Option 2 works, but is complex and scattered.
Did consider option 2 for Desktop/4D and option 3 for omSupply, but after clarifying number reuse behaviours option 3 is adequate for mSupply desktop and way easier than option 2.
## Consequences
- mSupply desktop, open mSupply needs to implement option 3.
- Whenever an app requests the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
- Whenever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
- Remove any code for syncing the `number` and `number_reuse` tables.
- Migration for removing any `number` and `number_reuse` records for stores not active on the datafile/site.
</details>
To handle numbers sequences correctly:
- [ ] Whenever getting the next number in the sequence, it checks for the `number` record. If it exists, use it. If not, get the `max(invoice_num)+1` and save the `number` record.
- [ ] Whenever a store becomes active on a site, it should delete the `number` and `number_reuse` records for the store so that it may reset as above. On becoming inactive would be equivalent.
- [ ] Remove any code for syncing the `number` and `number_reuse` tables.
- [ ] If receiving `number` or `number_reuse` records in sync, just ignore them.
- [ ] Migration for removing any `number` and `number_reuse` records for stores not active on the datafile/site.
| priority | number records removed from sync is your feature request related to a problem please describe pasted below in case one cannot access kdd sync number sequences not to sync date deciders chris petty andreievg status decided outcome don t sync number records generate them on use and clean up when stores move around context msupply uses tables number and number reuse to manage serials numbers on invoices oms shipments requisitions and several other things these number tables have historically not been included in sync which has lead to some headaches with serial numbers seemingly resetting as stores move from one site to another or sites get devices replaced and initialised recently in sync it was decided to sync them to resolve this problem this has been implemented in msupply desktop but in testing and addressing number sequence sync in omsupply it has become apparent it will not work when sites are upgraded to omsupply they may not have been previously upgraded to the latest version of msupply desktop that syncs number sequences this results in omsupply not getting them and generating its own that start from desktop tracks requisition sequences with requisition number for store mobile requisition serial number for store omsupply has split the sequences into request requisition for store and response requisition for store for request and response requisitions respectively this causes a mess if all these applications sync their number sequences when a store is moved to another site or the site is upgraded from one app to another the sequences don t get reused because the keys don t match omsupply has diverged in a good way from what desktop was doing so it s not immediately clear how to translate this we actually thought that mobile was doing separate sequences too but it appears in code that it doesn t it has a second sequence for another field requisition requester reference that seems to just be redundant we need a solution that allows stores to move 
from one site to another using potentially a different app and continue allocate serial numbers correctly including number reuse allows a site to initialised in omsupply that was previously used on an older version of desktop or mobile and continue allocate serial numbers correctly including number reuse regarding number reuse after a discussion with craigdrown we ve agreed that it s not used much in operation if a store moves from one site to another a site changes app or the number reuse pref is turned on then we do not need to backfill the number reuse table just continue from the current highest serial number small caveat on number reuse omsupply uses sequences for the types of requisition unlike the old apps while we don t currently support number reuse for requisitions it could lead to some odd behaviours if we did omsupply store moves to desktop or mobile we accept that it ll appear that there are duplicate sequences across the requisition types desktop mobile store moves omsupply we accept that both requisition types will appear to have gaps in the sequences this may wreck havoc with number reuse though if the pref is on the intuition will be to reuse all the gaps which will seem like an odd behaviour options option the central server on initialisation sorts out making sure the correct sequences exist as described in the context above we can rule this one out as it is flawed because each app has a different sequence key for requisition sequences we have to make the central server client aware to make sure we re sending out number records with the correct key for the client app 😭 stores can be moved from site to site outside of initialisation so solutions should address this too the central server would also have to make sure it does it correctly when moving a store making sure to satisfy the above point option the apps generate number sequences on initialisation and whenever a store is synced and set to active don t bother syncing number sequences the 
remote sites would have to be certain that they have synced all all the records relating to the sequences before generating the number records or face gaps and or duplicates the central server would have to make sure that for stores that are made active on itself it generates the sequence for use as a remote site would pros each remote app can maintain its own style of serial numbering at the moment the only tenable approach cons have to trust the remote apps to handle it correctly this is fine we already trust them to do serials numbering correctly regardless have to implement it in every app not too hard have to undo some existing work always the way 😉 option when ever an app requests the next number in the sequence it checks for the number record if it exists use it if not get the max invoice num and save the number record when ever a store becomes active on a site it should delete the number and number reuse records for the store so that it may reset as above on becoming inactive would be equivalent pros super easy and effective cons less fun seems pretty solid really decision option for all apps easiest approach with least complexity that is quite robust code largely in one place not great for number reuse though option is not functional option works but is complex and scattered did consider option for desktop and option for omsupply but after clarifying number reuse behaviours option is adequate for msupply desktop and way easier than option consequences msupply desktop open msupply needs to implement option whenever an app requests the next number in the sequence it checks for the number record if it exists use it if not get the max invoice num and save the number record whenever a store becomes active on a site it should delete the number and number reuse records for the store so that it may reset as above on becoming inactive would be equivalent remove any code for syncing the number and number reuse tables migration for removing any number and number 
reuse records for stores not active on the datafile site to handle numbers sequences correctly whenever getting the next number in the sequence it checks for the number record if it exists use it if not get the max invoice num and save the number record whenever a store becomes active on a site it should delete the number and number reuse records for the store so that it may reset as above on becoming inactive would be equivalent remove any code for syncing the number and number reuse tables if receiving number or number reuse records in sync just ignore them migration for removing any number and number reuse records for stores not active on the datafile site | 1 |
489,060 | 14,100,420,446 | IssuesEvent | 2020-11-06 04:08:16 | AY2021S1-CS2113T-W11-3/tp | https://api.github.com/repos/AY2021S1-CS2113T-W11-3/tp | closed | [PE-D] [CS2113T-W11-3] Deleting is rejected if i add a white space at the back | priority.Medium severity.Medium | [CS2113T-W11-3]

<!--session: 1604047632392-d1e1cdea-7ee1-480f-b261-56799b6437c2-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: Varsha3006/ped#6 | 1.0 | [PE-D] [CS2113T-W11-3] Deleting is rejected if i add a white space at the back - [CS2113T-W11-3]

<!--session: 1604047632392-d1e1cdea-7ee1-480f-b261-56799b6437c2-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: Varsha3006/ped#6 | priority | deleting is rejected if i add a white space at the back labels severity medium type functionalitybug original ped | 1 |
619,445 | 19,525,889,122 | IssuesEvent | 2021-12-30 07:38:21 | bounswe/2021SpringGroup3 | https://api.github.com/repos/bounswe/2021SpringGroup3 | opened | Backend: Advanced Search Bug | Type: Bug Status: Available Priority: Medium Component: Backend | ### What happened?
Advanced search API takes start and end fields required. Users should be able to filter posts with only one of the fields
### Which version?
Nodejs version 14.17.0
### How to reproduce this bug?
_No response_ | 1.0 | Backend: Advanced Search Bug - ### What happened?
Advanced search API takes start and end fields required. Users should be able to filter posts with only one of the fields
### Which version?
Nodejs version 14.17.0
### How to reproduce this bug?
_No response_ | priority | backend advanced search bug what happened advanced search api takes start and end fields required users should be able to filter posts with only one of the fields which version nodejs version how to reproduce this bug no response | 1 |
89,936 | 3,807,035,370 | IssuesEvent | 2016-03-25 04:24:37 | OuterDeepSpace/OuterDeepSpace | https://api.github.com/repos/OuterDeepSpace/OuterDeepSpace | closed | MainGameDlg instance has no attribute 'onSystems' | Bug Medium Priority | Issue happens when going from menu Planets | System List
Screenshot: http://i.imgur.com/DKnjBdf.jpg | 1.0 | MainGameDlg instance has no attribute 'onSystems' - Issue happens when going from menu Planets | System List
Screenshot: http://i.imgur.com/DKnjBdf.jpg | priority | maingamedlg instance has no attribute onsystems issue happens when going from menu planets system list screenshot | 1 |
649,420 | 21,300,576,093 | IssuesEvent | 2022-04-15 02:10:44 | moja-global/community-website | https://api.github.com/repos/moja-global/community-website | closed | Feature Request: Add scroll to top button | enhancement Assigned feature request Issue:No-Activity Priority = Medium | ### Is your feature request related to a problem? Please describe.
When the page is really long, it becomes too tedious to scroll back to top through screens and screens of content. Hence the role of the Scroll to top button.
### Describe the solution you'd like.
_No response_
### Describe alternatives you've considered
_No response_
### Additional context.
_No response_ | 1.0 | Feature Request: Add scroll to top button - ### Is your feature request related to a problem? Please describe.
When the page is really long, it becomes too tedious to scroll back to top through screens and screens of content. Hence the role of the Scroll to top button.
### Describe the solution you'd like.
_No response_
### Describe alternatives you've considered
_No response_
### Additional context.
_No response_ | priority | feature request add scroll to top button is your feature request related to a problem please describe when the page is really long it becomes too tedious to scroll back to top through screens and screens of content hence the role of the scroll to top button describe the solution you d like no response describe alternatives you ve considered no response additional context no response | 1 |
82,180 | 3,603,818,042 | IssuesEvent | 2016-02-03 20:29:16 | hpi-swt2/hpi-hiwi-portal | https://api.github.com/repos/hpi-swt2/hpi-hiwi-portal | closed | Student edit page | 0 - Backlog Priority Medium | - Status obligatory (as it is already?)
- current employer (not obligatory)
- Dropdown employers are aloud to view my profile, other students can view my profile...
- "Studienschwerpunkt -> Auswahl von Lehrstühlen"
- Interessen
- Abitur
- Projekterfahrung
- 2 Slots zum eingeben einer nicht aufgeführten Sprache | 1.0 | Student edit page - - Status obligatory (as it is already?)
- current employer (not obligatory)
- Dropdown employers are aloud to view my profile, other students can view my profile...
- "Studienschwerpunkt -> Auswahl von Lehrstühlen"
- Interessen
- Abitur
- Projekterfahrung
- 2 Slots zum eingeben einer nicht aufgeführten Sprache | priority | student edit page status obligatory as it is already current employer not obligatory dropdown employers are aloud to view my profile other students can view my profile studienschwerpunkt auswahl von lehrstühlen interessen abitur projekterfahrung slots zum eingeben einer nicht aufgeführten sprache | 1 |
188,149 | 6,773,358,827 | IssuesEvent | 2017-10-27 05:21:55 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | opened | Coverity issue seen with CID: 178237 | area: Drivers bug priority: medium | Static code scan issues seen in File: /drivers/ieee802154/ieee802154_mcr20a.c
Category: Memory - corruptions
Function: _mcr20a_write_burst
Component: Drivers
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | 1.0 | Coverity issue seen with CID: 178237 - Static code scan issues seen in File: /drivers/ieee802154/ieee802154_mcr20a.c
Category: Memory - corruptions
Function: _mcr20a_write_burst
Component: Drivers
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | priority | coverity issue seen with cid static code scan issues seen in file drivers c category memory corruptions function write burst component drivers please fix or provide comments to square it off in coverity in the link | 1 |
606,701 | 18,767,843,671 | IssuesEvent | 2021-11-06 08:42:22 | AY2122S1-CS2103T-T12-3/tp | https://api.github.com/repos/AY2122S1-CS2103T-T12-3/tp | closed | [PE-D] Phone Number infinite | priority.Medium | As you can see here, the phone number can be extremely long. Users may, by accident, enter a very long phone number and it is not logical.
Suggest to place some preventative measures to ensure that the phone number is not that long
But keep up the good work!

<!--session: 1635494624855-6c4169c8-8f98-434c-bb03-2bfcaad4f18c--><!--Version: Web v3.4.1-->
-------------
Labels: `type.FeatureFlaw` `severity.Medium`
original: Timothyoung97/ped#5 | 1.0 | [PE-D] Phone Number infinite - As you can see here, the phone number can be extremely long. Users may, by accident, enter a very long phone number and it is not logical.
Suggest to place some preventative measures to ensure that the phone number is not that long
But keep up the good work!

<!--session: 1635494624855-6c4169c8-8f98-434c-bb03-2bfcaad4f18c--><!--Version: Web v3.4.1-->
-------------
Labels: `type.FeatureFlaw` `severity.Medium`
original: Timothyoung97/ped#5 | priority | phone number infinite as you can see here the phone number can be extremely long users may by accident enter a very long phone number and it is not logical suggest to place some preventative measures to ensure that the phone number is not that long but keep up the good work labels type featureflaw severity medium original ped | 1 |
392,479 | 11,591,907,959 | IssuesEvent | 2020-02-24 10:23:10 | luna/enso | https://api.github.com/repos/luna/enso | closed | Implement Multiclient Write Lock | Category: Tooling Change: Non-Breaking Difficulty: Hard Priority: Medium Type: Enhancement | ### Summary
It is important for our users that they are able to connect multiple IDE clients to a single running Enso project. This enables a multitude of sharing and collaborative workflows, letting users work seamlessly together from their own machines. While, initially, we only aim to have rudimentary multi-client support. we want to evolve to support collaborative real-time editing in the future, and so the implementation should account for this.
Furthermore, we want to remain compatible with the single-client LSP as we implement this, so care will need to be taken to ensure that we remain compatible while creating an authorisation flow for additional connections.
### Value
We have rudimentary multi-client support in the engine, allowing multiple users to connect to one instance at once. Furthermore, maintiain LSP compatibility.
### Specification
- [ ] Create a concept of the 'write lock'. The engine will only accept `didChange` messages from the client currently holding the write lock.
+ Write lock should be automatically granted to the first client to connect.
+ If any client makes a change without holding the write lock, the change should be denied.
- [ ] Implement the write lock on top of the capabilities system from #464.
+ [ ] Create a capability `canEdit` as specified in the design document.
+ [ ] Use it to prepare for the above behaviour. It cannot be completed as part of this issue as the text functionality does not yet exist.
- [ ] Ensure that the specification document is up to date with respect to any changes to the protocol.
### Acceptance Criteria & Test Cases
- We can support multiple clients connecting to a single engine instance, where only one client is able to write at any given time.
- This functionality is rigorously tested using stub endpoints.
| 1.0 | Implement Multiclient Write Lock - ### Summary
It is important for our users that they are able to connect multiple IDE clients to a single running Enso project. This enables a multitude of sharing and collaborative workflows, letting users work seamlessly together from their own machines. While, initially, we only aim to have rudimentary multi-client support. we want to evolve to support collaborative real-time editing in the future, and so the implementation should account for this.
Furthermore, we want to remain compatible with the single-client LSP as we implement this, so care will need to be taken to ensure that we remain compatible while creating an authorisation flow for additional connections.
### Value
We have rudimentary multi-client support in the engine, allowing multiple users to connect to one instance at once. Furthermore, maintiain LSP compatibility.
### Specification
- [ ] Create a concept of the 'write lock'. The engine will only accept `didChange` messages from the client currently holding the write lock.
+ Write lock should be automatically granted to the first client to connect.
+ If any client makes a change without holding the write lock, the change should be denied.
- [ ] Implement the write lock on top of the capabilities system from #464.
+ [ ] Create a capability `canEdit` as specified in the design document.
+ [ ] Use it to prepare for the above behaviour. It cannot be completed as part of this issue as the text functionality does not yet exist.
- [ ] Ensure that the specification document is up to date with respect to any changes to the protocol.
### Acceptance Criteria & Test Cases
- We can support multiple clients connecting to a single engine instance, where only one client is able to write at any given time.
- This functionality is rigorously tested using stub endpoints.
| priority | implement multiclient write lock summary it is important for our users that they are able to connect multiple ide clients to a single running enso project this enables a multitude of sharing and collaborative workflows letting users work seamlessly together from their own machines while initially we only aim to have rudimentary multi client support we want to evolve to support collaborative real time editing in the future and so the implementation should account for this furthermore we want to remain compatible with the single client lsp as we implement this so care will need to be taken to ensure that we remain compatible while creating an authorisation flow for additional connections value we have rudimentary multi client support in the engine allowing multiple users to connect to one instance at once furthermore maintiain lsp compatibility specification create a concept of the write lock the engine will only accept didchange messages from the client currently holding the write lock write lock should be automatically granted to the first client to connect if any client makes a change without holding the write lock the change should be denied implement the write lock on top of the capabilities system from create a capability canedit as specified in the design document use it to prepare for the above behaviour it cannot be completed as part of this issue as the text functionality does not yet exist ensure that the specification document is up to date with respect to any changes to the protocol acceptance criteria test cases we can support multiple clients connecting to a single engine instance where only one client is able to write at any given time this functionality is rigorously tested using stub endpoints | 1 |
486,510 | 14,010,312,439 | IssuesEvent | 2020-10-29 04:46:27 | vanjarosoftware/Vanjaro.Platform | https://api.github.com/repos/vanjarosoftware/Vanjaro.Platform | closed | Implement new settings in theme builder | Area: Frontend Enhancement Priority: Medium Release: Minor | 
**Sign in Form**
- Hover text color
- Font family
**Social Authentication buttons**
- Color
- Background
- Background hover color
- Font size
- Letter spacing
- Hover text color
- Font family
| 1.0 | Implement new settings in theme builder - 
**Sign in Form**
- Hover text color
- Font family
**Social Authentication buttons**
- Color
- Background
- Background hover color
- Font size
- Letter spacing
- Hover text color
- Font family
| priority | implement new settings in theme builder sign in form hover text color font family social authentication buttons color background background hover color font size letter spacing hover text color font family | 1 |
457,530 | 13,158,104,990 | IssuesEvent | 2020-08-10 13:50:06 | HabitRPG/habitica-ios | https://api.github.com/repos/HabitRPG/habitica-ios | closed | Tasks: Improve Challenge task deletion flow | Priority: medium Type: Enhancement | This will include:
- [x] making sure the Challenge megaphone icon and broken Challenge icon display
- [x] a new modal for handling tasks from broken Challenges by tapping broken icon
- [x] a new modal for when a user tries to delete a Challenge task from an active Challenge
- [x] updated(?) modal for when a user leaves an active challenge | 1.0 | Tasks: Improve Challenge task deletion flow - This will include:
- [x] making sure the Challenge megaphone icon and broken Challenge icon display
- [x] a new modal for handling tasks from broken Challenges by tapping broken icon
- [x] a new modal for when a user tries to delete a Challenge task from an active Challenge
- [x] updated(?) modal for when a user leaves an active challenge | priority | tasks improve challenge task deletion flow this will include making sure the challenge megaphone icon and broken challenge icon display a new modal for handling tasks from broken challenges by tapping broken icon a new modal for when a user tries to delete a challenge task from an active challenge updated modal for when a user leaves an active challenge | 1 |
321,306 | 9,796,624,192 | IssuesEvent | 2019-06-11 08:06:17 | projectacrn/acrn-hypervisor | https://api.github.com/repos/projectacrn/acrn-hypervisor | closed | HV will crash if launch two UOS with same UUID | priority: P3-Medium status: Assigned type: bug | HV will crash if launch two UOS with same UUID | 1.0 | HV will crash if launch two UOS with same UUID - HV will crash if launch two UOS with same UUID | priority | hv will crash if launch two uos with same uuid hv will crash if launch two uos with same uuid | 1 |
119,749 | 4,775,352,547 | IssuesEvent | 2016-10-27 10:05:12 | MatchboxDorry/dorry-web | https://api.github.com/repos/MatchboxDorry/dorry-web | closed | [UI]test1-重启,停止或者删除一个service服务后看到操作是否成功的提示 | censor: approved effort: 2 (medium) feature: view template flag: fixed priority: 2 (required) type: enhancement | **System:**
Mac mini Os X EI Capitan
**Browser:**
Chrome
**What I want to do**
我想重启或、停止或者删除一个service成功后看到操作是否成功的提示
**Where I am**
Service page
**What I have done**
我重启了一个Stopped Service的服务,并等待看到操作是否成功的提示.
**What I expect:**
无论操作成功是否,我希望看到页面顶部弹出提示,并且显示5s后消失.
**What really happened**:
当我进行了一个启动service的操作后,我却没有看到提示弹出. | 1.0 | [UI]test1-重启,停止或者删除一个service服务后看到操作是否成功的提示 - **System:**
Mac mini Os X EI Capitan
**Browser:**
Chrome
**What I want to do**
我想重启或、停止或者删除一个service成功后看到操作是否成功的提示
**Where I am**
Service page
**What I have done**
我重启了一个Stopped Service的服务,并等待看到操作是否成功的提示.
**What I expect:**
无论操作成功是否,我希望看到页面顶部弹出提示,并且显示5s后消失.
**What really happened**:
当我进行了一个启动service的操作后,我却没有看到提示弹出. | priority | 重启,停止或者删除一个service服务后看到操作是否成功的提示 system mac mini os x ei capitan browser chrome what i want to do 我想重启或、停止或者删除一个service成功后看到操作是否成功的提示 where i am service page what i have done 我重启了一个stopped service的服务,并等待看到操作是否成功的提示 what i expect 无论操作成功是否,我希望看到页面顶部弹出提示, what really happened 当我进行了一个启动service的操作后,我却没有看到提示弹出 | 1 |
270,147 | 8,452,851,394 | IssuesEvent | 2018-10-20 09:13:14 | EUCweb/BIS-F | https://api.github.com/repos/EUCweb/BIS-F | closed | Windows Defender -ArgumentList failing | Priority: Medium | Firstly, thank you for creating BIS-F. I'm an SCCM engineering who is moving into the Citrix PVS space, BIS-F integrated with our OSD task sequences really simplifies the whole processes.
I noticed that during the task sequence the step for updating virus definitions for Windows Defender sits idle and then fails after 60 maximum run time. Looking into the script further, the command line is **Start-Process -FilePath "$ProductPath\MpCMDrun.exe" -ArgumentList "SignatureUpdate" -WindowStyle Hidden**
The signatureupdate argument should be -SignatureUpdate (proceeding dash is missing). This causes Windows Defender to error 'Update failed with hr: 0x80070490".
Updating the PowerShell script to Start-Process -FilePath "**$ProductPath\MpCMDrun.exe" -ArgumentList "-SignatureUpdate" -WindowStyle Hidden**
resolves the issues.
| 1.0 | Windows Defender -ArgumentList failing - Firstly, thank you for creating BIS-F. I'm an SCCM engineering who is moving into the Citrix PVS space, BIS-F integrated with our OSD task sequences really simplifies the whole processes.
I noticed that during the task sequence the step for updating virus definitions for Windows Defender sits idle and then fails after 60 maximum run time. Looking into the script further, the command line is **Start-Process -FilePath "$ProductPath\MpCMDrun.exe" -ArgumentList "SignatureUpdate" -WindowStyle Hidden**
The signatureupdate argument should be -SignatureUpdate (proceeding dash is missing). This causes Windows Defender to error 'Update failed with hr: 0x80070490".
Updating the PowerShell script to Start-Process -FilePath "**$ProductPath\MpCMDrun.exe" -ArgumentList "-SignatureUpdate" -WindowStyle Hidden**
resolves the issues.
| priority | windows defender argumentlist failing firstly thank you for creating bis f i m an sccm engineering who is moving into the citrix pvs space bis f integrated with our osd task sequences really simplifies the whole processes i noticed that during the task sequence the step for updating virus definitions for windows defender sits idle and then fails after maximum run time looking into the script further the command line is start process filepath productpath mpcmdrun exe argumentlist signatureupdate windowstyle hidden the signatureupdate argument should be signatureupdate proceeding dash is missing this causes windows defender to error update failed with hr updating the powershell script to start process filepath productpath mpcmdrun exe argumentlist signatureupdate windowstyle hidden resolves the issues | 1 |
283,538 | 8,719,838,033 | IssuesEvent | 2018-12-08 05:08:29 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Problem displaying multiple plots with history variables from Ale3d. | bug crash likelihood medium priority reviewed severity high wrong results | Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Bug 1:
Step to demonstrate bug:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Press Draw
The second plot will generate the message:
WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded
no data.
This is because VisIt used the SIL selection from plot 1 of
incmat1_1 on
incmat2_2 off
incmat3_3 off
for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be
incmat1_1 off
incmat2_2 on
incmat3_3 off
Bug 2:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Filled Boundary plot of material.
4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
5) Press Draw
This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 958
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem displaying multiple plots with history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/08/2012
Due date:
% Done: 100
Estimated time: 16.0
Created: 02/08/2012 02:53 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Bug 1:
Step to demonstrate bug:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Press Draw
The second plot will generate the message:
WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded
no data.
This is because VisIt used the SIL selection from plot 1 of
incmat1_1 on
incmat2_2 off
incmat3_3 off
for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be
incmat1_1 off
incmat2_2 on
incmat3_3 off
Bug 2:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Filled Boundary plot of material.
4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
5) Press Draw
This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
Comments:
With some investigation I found the obvious, that the routine avtSILRestriction::SetFromCompatibleRestriction is returning true when the second plot is created. This is because the SILs both have incmat1_1, incmat2_2 and incmat3_3. So either that routine needs to know that it isn't really the same SIL, or the SIL needs to only consist of the parts that make sense, then there would be no problem in SetFromCompatibleRestriction. Sounds like the way to go, don't know how much work is involved. It also seems right since the user shouldn't even see the other materials as possibilities for selection.
So I don't think having a different SIL for the different variables is a winner since there is one material object per file, which is used to set avtMaterialMetaData.
I committed revisions 17424 and 17426 to the 2.4 RC and trunk with thefollowing change:1) I modified VisIt so that when you have "Apply subset selections to all plots" on and add a plot of a material restricted variable it doesn't apply the SIL from a compatible plot unless the variables of both plots are restricted to the same materials. This resolves #958.M help/en_US/relnotes2.4.2.htmlM viewer/main/ViewerPlotList.C
| 1.0 | Problem displaying multiple plots with history variables from Ale3d. - Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Bug 1:
Step to demonstrate bug:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Press Draw
The second plot will generate the message:
WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded
no data.
This is because VisIt used the SIL selection from plot 1 of
incmat1_1 on
incmat2_2 off
incmat3_3 off
for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be
incmat1_1 off
incmat2_2 on
incmat3_3 off
Bug 2:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Filled Boundary plot of material.
4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
5) Press Draw
This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 958
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem displaying multiple plots with history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/08/2012
Due date:
% Done: 100
Estimated time: 16.0
Created: 02/08/2012 02:53 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Bug 1:
Step to demonstrate bug:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Press Draw
The second plot will generate the message:
WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded
no data.
This is because VisIt used the SIL selection from plot 1 of
incmat1_1 on
incmat2_2 off
incmat3_3 off
for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be
incmat1_1 off
incmat2_2 on
incmat3_3 off
Bug 2:
1) Have "Apply subset selections to all plots" on (the default).
2) Open if7f_004.00020
3) Add a Filled Boundary plot of material.
4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
5) Press Draw
This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
Comments:
With some investigation I found the obvious, that the routine avtSILRestriction::SetFromCompatibleRestriction is returning true when the second plot is created. This is because the SILs both have incmat1_1, incmat2_2 and incmat3_3. So either that routine needs to know that it isn't really the same SIL, or the SIL needs to only consist of the parts that make sense, then there would be no problem in SetFromCompatibleRestriction. Sounds like the way to go, don't know how much work is involved. It also seems right since the user shouldn't even see the other materials as possibilities for selection.
So I don't think having a different SIL for the different variables is a winner since there is one material object per file, which is used to set avtMaterialMetaData.
I committed revisions 17424 and 17426 to the 2.4 RC and trunk with thefollowing change:1) I modified VisIt so that when you have "Apply subset selections to all plots" on and add a plot of a material restricted variable it doesn't apply the SIL from a compatible plot unless the variables of both plots are restricted to the same materials. This resolves #958.M help/en_US/relnotes2.4.2.htmlM viewer/main/ViewerPlotList.C
| priority | problem displaying multiple plots with history variables from al reported some bugs displaying multiple plots with history variables here is information about reproducing them the data files to reproduce the bug are bug step to demonstrate bug have apply subset selections to all plots on the default open add a pseudocolor of hist dmfdt temp add a pseudocolor of hist dmfdt temp press draw the second plot will generate the message warning the pseudocolor plot of variable hist dmfdt temp yielded no data this is because visit used the sil selection from plot of on off off for plot when the second variable is defined on the work around is to turn off apply subset selections to all plots and set the sil for the second plot to be off on off bug have apply subset selections to all plots on the default open add a filled boundary plot of material add a pseudocolor of hist dmfdt temp press draw this gives a plot where the pseudocolor plot is drawing extra partial zones this is because it selects all the materials for the pseudocolor plot when the variable is only defined on you may need to hide the filled boundary plot to see the problem redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject problem displaying multiple plots with history variables from assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description al reported some bugs displaying multiple plots with history variables here is information about reproducing them the data files to reproduce the bug are bug step to demonstrate bug have apply subset selections to all plots on the default open add a pseudocolor of 
hist dmfdt temp add a pseudocolor of hist dmfdt temp press draw the second plot will generate the message warning the pseudocolor plot of variable hist dmfdt temp yielded no data this is because visit used the sil selection from plot of on off off for plot when the second variable is defined on the work around is to turn off apply subset selections to all plots and set the sil for the second plot to be off on off bug have apply subset selections to all plots on the default open add a filled boundary plot of material add a pseudocolor of hist dmfdt temp press draw this gives a plot where the pseudocolor plot is drawing extra partial zones this is because it selects all the materials for the pseudocolor plot when the variable is only defined on you may need to hide the filled boundary plot to see the problem comments with some investigation i found the obvious that the routine avtsilrestriction setfromcompatiblerestriction is returning true when the second plot is created this is because the sils both have and so either that routine needs to know that it isn t really the same sil or the sil needs to only consist of the parts that make sense then there would be no problem in setfromcompatiblerestriction sounds like the way to go don t know how much work is involved it also seems right since the user shouldn t even see the other materials as possibilities for selection so i don t think having a different sil for the different variables is a winner since there is one material object per file which is used to set avtmaterialmetadata i committed revisions and to the rc and trunk with thefollowing change i modified visit so that when you have apply subset selections to all plots on and add a plot of a material restricted variable it doesn t apply the sil from a compatible plot unless the variables of both plots are restricted to the same materials this resolves m help en us htmlm viewer main viewerplotlist c | 1 |
697,911 | 23,958,419,241 | IssuesEvent | 2022-09-12 16:47:55 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Allow fractional factor for kTimeMultiplier | kind/bug area/docdb priority/medium status/awaiting-triage | Jira Link: [DB-3448](https://yugabyte.atlassian.net/browse/DB-3448)
### Description
Currently only integral factor is allowed for kTimeMultiplier.
This issue makes kTimeMultiplier of float type so that fractional factor is allowed. | 1.0 | [DocDB] Allow fractional factor for kTimeMultiplier - Jira Link: [DB-3448](https://yugabyte.atlassian.net/browse/DB-3448)
### Description
Currently only integral factor is allowed for kTimeMultiplier.
This issue makes kTimeMultiplier of float type so that fractional factor is allowed. | priority | allow fractional factor for ktimemultiplier jira link description currently only integral factor is allowed for ktimemultiplier this issue makes ktimemultiplier of float type so that fractional factor is allowed | 1 |
799,921 | 28,316,441,770 | IssuesEvent | 2023-04-10 20:03:15 | minio/docs | https://api.github.com/repos/minio/docs | closed | [Release] Console v0.25.0 doc impacts | priority: medium | Console [v0.25.0](https://github.com/minio/console/releases/tag/v0.25.0) updates the login screen.
- [ ] Review the `administration/minio-console/#logging-in` page for updates to the flow. ([PR #2695](https://github.com/minio/console/pull/2695)) | 1.0 | [Release] Console v0.25.0 doc impacts - Console [v0.25.0](https://github.com/minio/console/releases/tag/v0.25.0) updates the login screen.
- [ ] Review the `administration/minio-console/#logging-in` page for updates to the flow. ([PR #2695](https://github.com/minio/console/pull/2695)) | priority | console doc impacts console updates the login screen review the administration minio console logging in page for updates to the flow | 1 |
234,399 | 7,720,710,758 | IssuesEvent | 2018-05-24 00:45:55 | DarkPacks/SevTech-Ages | https://api.github.com/repos/DarkPacks/SevTech-Ages | closed | Wither produced Low Grade Charcoal from Netherrack. | Priority: Medium Status: Completed Type: Bug | <!-- Instructions on how to do issues like your boy darkosto -->
<!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install
of the modpack and try to replicate the crash/issue otherwise we can ignore the crash due to a "modded" pack.
-->
<!-- Before anything else, use the *search* feature! -->
<!-- * Maybe someone already reported the issue you're experiencing? -->
<!-- * Maybe you can find the answer to your question by looking at older or closed issues? -->
<!-- * Have a go at it and see! -->
<!-- * Please search on the [issue track](../) before creating one. -->
## Issue / Bug
<!--- If you're describing a bug, describe the current behavior -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
<!--- MAKE SURE TO ADD LOGS! -->
<!--- If possible add a video/gif of the issue/bug (makes it easier for darkosto to understand you) -->
Wither produced Low Grade Charcoal from Netherrack.
https://clips.twitch.tv/ClearCloudyLapwingWoofer ( don’t forget off sounds )
## Expected Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
1. Call wither.
<!--- add more if needed -->
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Logs
<!-- If your reporting a crash/bug. You NEED to provide logs otherwise your issue will be closed and ignored. -->
<!-- Twitch logs can be found in the installation directory for the Twitch App. Or click the ... button on SevTech and hit
"Open Folder" then upload the latest/crash logs to PasteBin or Gist. DON'T Upload them to GitHub as we don't want to download
your logs. ATLauncher is a similar process. -->
* Client/Server Log:
* Crash Log:
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Modpack Version: 3.0.5 probably
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
It was on melharucos’s server.
| 1.0 | Wither produced Low Grade Charcoal from Netherrack. - <!-- Instructions on how to do issues like your boy darkosto -->
<!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install
of the modpack and try to replicate the crash/issue otherwise we can ignore the crash due to a "modded" pack.
-->
<!-- Before anything else, use the *search* feature! -->
<!-- * Maybe someone already reported the issue you're experiencing? -->
<!-- * Maybe you can find the answer to your question by looking at older or closed issues? -->
<!-- * Have a go at it and see! -->
<!-- * Please search on the [issue track](../) before creating one. -->
## Issue / Bug
<!--- If you're describing a bug, describe the current behavior -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
<!--- MAKE SURE TO ADD LOGS! -->
<!--- If possible add a video/gif of the issue/bug (makes it easier for darkosto to understand you) -->
Wither produced Low Grade Charcoal from Netherrack.
https://clips.twitch.tv/ClearCloudyLapwingWoofer ( don’t forget off sounds )
## Expected Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
1. Call wither.
<!--- add more if needed -->
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Logs
<!-- If your reporting a crash/bug. You NEED to provide logs otherwise your issue will be closed and ignored. -->
<!-- Twitch logs can be found in the installation directory for the Twitch App. Or click the ... button on SevTech and hit
"Open Folder" then upload the latest/crash logs to PasteBin or Gist. DON'T Upload them to GitHub as we don't want to download
your logs. ATLauncher is a similar process. -->
* Client/Server Log:
* Crash Log:
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Modpack Version: 3.0.5 probably
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
It was on melharucos’s server.
| priority | wither produced low grade charcoal from netherrack note if you have other mods installed or you have changed versions please revert to a clean install of the modpack and try to replicate the crash issue otherwise we can ignore the crash due to a modded pack issue bug wither produced low grade charcoal from netherrack don’t forget off sounds expected behavior possible solution steps to reproduce for bugs call wither context logs twitch logs can be found in the installation directory for the twitch app or click the button on sevtech and hit open folder then upload the latest crash logs to pastebin or gist don t upload them to github as we don t want to download your logs atlauncher is a similar process client server log crash log client information modpack version probably it was on melharucos’s server | 1 |
224,954 | 7,473,998,913 | IssuesEvent | 2018-04-03 16:57:58 | dmwm/CRABServer | https://api.github.com/repos/dmwm/CRABServer | opened | review values of ExitCodes in CMSRunAnalysis.py | Priority: Medium Status: Available Type: Bug | e.g. do not set 10040 (which refers to site problems) when user's scriptExe fails ! See
https://hypernews.cern.ch/HyperNews/CMS/get/computing-tools/3727/2.html
looks like we should improve CRAB wrapper here
https://github.com/dmwm/CRABServer/blob/master/scripts/CMSRunAnalysis.py#L38-L44
to make sure we use exit codes consistently with
https://twiki.cern.ch/twiki/bin/viewauth/CMSPublic/JobExitCodes | 1.0 | review values of ExitCodes in CMSRunAnalysis.py - e.g. do not set 10040 (which refers to site problems) when user's scriptExe fails ! See
https://hypernews.cern.ch/HyperNews/CMS/get/computing-tools/3727/2.html
looks like we should improve CRAB wrapper here
https://github.com/dmwm/CRABServer/blob/master/scripts/CMSRunAnalysis.py#L38-L44
to make sure we use exit codes consistently with
https://twiki.cern.ch/twiki/bin/viewauth/CMSPublic/JobExitCodes | priority | review values of exitcodes in cmsrunanalysis py e g do not set which refers to site problems when user s scriptexe fails see looks like we should improve crab wrapper here to make sure we use exit codes consistently with | 1 |
449,848 | 12,975,798,506 | IssuesEvent | 2020-07-21 17:37:39 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | [feature] Declare what a recipe 'provides' | complex: medium component: graph priority: low stage: queue type: feature | Add a `provides` attribute to the ConanFile class, it will list other package names that define the same functionality. Some examples:
* `libjpeg`, `libjpeg-turbo`, `mozjpeg` are different implementations of the same functionality and break the ODR principle.
* deprecated recipes like `cpp-taskflow` that takes the new name `taskflow`
* frameworks and individual libraries like a monolith `boost` and a modular one
All of them will introduce duplicated functionality in the graph (probably linking problems) that we want to avoid.
Recipes `mozjpeg` and `libjpeg-turbo` will contain a `provides = "libjpeg"` attribute and Conan will raise if both are present in the graph (any alternative and the _master_ one).
The workaround for this situation is to replace the requirement modifying one recipe in the middle to change the requirements, but eventually it will remove all the alternative implementations from the graph. **The actual solution would be to create a "proxy/virtual" recipe that, based on an option, will choose one of the implementations**, and all the recipes should require that proxy recipe instead of the actual jpeg implementation. Conan built-in functionality ensures that there is only one instance of the proxy recipe and there will be only one implementation of the actual functionality.
We should provide a POC together with the feature implementation to check it actually works. | 1.0 | [feature] Declare what a recipe 'provides' - Add a `provides` attribute to the ConanFile class, it will list other package names that define the same functionality. Some examples:
* `libjpeg`, `libjpeg-turbo`, `mozjpeg` are different implementations of the same functionality and break the ODR principle.
* deprecated recipes like `cpp-taskflow` that takes the new name `taskflow`
* frameworks and individual libraries like a monolith `boost` and a modular one
All of them will introduce duplicated functionality in the graph (probably linking problems) that we want to avoid.
Recipes `mozjpeg` and `libjpeg-turbo` will contain a `provides = "libjpeg"` attribute and Conan will raise if both are present in the graph (any alternative and the _master_ one).
The workaround for this situation is to replace the requirement modifying one recipe in the middle to change the requirements, but eventually it will remove all the alternative implementations from the graph. **The actual solution would be to create a "proxy/virtual" recipe that, based on an option, will choose one of the implementations**, and all the recipes should require that proxy recipe instead of the actual jpeg implementation. Conan built-in functionality ensures that there is only one instance of the proxy recipe and there will be only one implementation of the actual functionality.
We should provide a POC together with the feature implementation to check it actually works. | priority | declare what a recipe provides add a provides attribute to the conanfile class it will list other package names that define the same functionality some examples libjpeg libjpeg turbo mozjpeg are different implementations of the same functionality and break the odr principle deprecated recipes like cpp taskflow that takes the new name taskflow frameworks and individual libraries like a monolith boost and a modular one all of them will introduce duplicated functionality in the graph probably linking problems that we want to avoid recipes mozjpeg and libjpeg turbo will contain a provides libjpeg attribute and conan will raise if both are present in the graph any alternative and the master one the workaround for this situation is to replace the requirement modifying one recipe in the middle to change the requirements but eventually it will remove all the alternative implementations from the graph the actual solution would be to create a proxy virtual recipe that based on an option will choose one of the implementations and all the recipes should require that proxy recipe instead of the actual jpeg implementation conan built in functionality ensures that there is only one instance of the proxy recipe and there will be only one implementation of the actual functionality we should provide a poc together with the feature implementation to check it actually works | 1 |
637,238 | 20,623,863,228 | IssuesEvent | 2022-03-07 20:15:29 | abaporu-C/GROW-POS | https://api.github.com/repos/abaporu-C/GROW-POS | closed | Member/Edit UI | UI medium priority | - add prepends
- add Cancel button (go back)
- add ddl HouseholdName
- Income Situations in table format
See screenshot for details.

| 1.0 | Member/Edit UI - - add prepends
- add Cancel button (go back)
- add ddl HouseholdName
- Income Situations in table format
See screenshot for details.

| priority | member edit ui add prepends add cancel button go back add ddl householdname income situations in table format see screenshot for details | 1 |
40,187 | 2,867,245,912 | IssuesEvent | 2015-06-05 12:03:49 | MKergall/osmbonuspack | https://api.github.com/repos/MKergall/osmbonuspack | closed | Get visible items of a cluster | auto-migrated Priority-Medium Type-Enhancement | ```
I have a problem to show markers on map. When two or more marker are overlapped
then the cluster won't open. I solved it with deleting the cluster from
overlays and adding all Markers again separately. But I want get a list of
visible items in a cluster by clicking on it.
Assume we have 20 markers in a cluster and in a zoom level only 4 items are
available (Showing a circle with number 4). now I want to get these for makers
with click on cluster marker.
Thanks.
```
Original issue reported on code.google.com by `hr.saleh...@gmail.com` on 4 Feb 2015 at 11:37 | 1.0 | Get visible items of a cluster - ```
I have a problem to show markers on map. When two or more marker are overlapped
then the cluster won't open. I solved it with deleting the cluster from
overlays and adding all Markers again separately. But I want get a list of
visible items in a cluster by clicking on it.
Assume we have 20 markers in a cluster and in a zoom level only 4 items are
available (Showing a circle with number 4). now I want to get these for makers
with click on cluster marker.
Thanks.
```
Original issue reported on code.google.com by `hr.saleh...@gmail.com` on 4 Feb 2015 at 11:37 | priority | get visible items of a cluster i have a problem to show markers on map when two or more marker are overlapped then the cluster won t open i solved it with deleting the cluster from overlays and adding all markers again separately but i want get a list of visible items in a cluster by clicking on it assume we have markers in a cluster and in a zoom level only items are available showing a circle with number now i want to get these for makers with click on cluster marker thanks original issue reported on code google com by hr saleh gmail com on feb at | 1 |
632,421 | 20,196,347,944 | IssuesEvent | 2022-02-11 11:00:49 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | probe-mobile: improve progress bar of tests | bug ooni/probe-mobile priority/medium | Currently the progress bar has some issues:
* The progress is local to every test group. This means that every time a test group finishes (ex. websites), the progress bar resets to zero and starts over. This is confusing because it's unclear to a user what is the overall progress/time remaining for the test session.
* In some test groups the progress bar is "jumpy", as in it resets back to zero and then jumps back to the progress that is was before. This happens frequently in the experiments test group.
* When you minimise a test, the progress bar resets to the unknown state and it takes some time for it to figure out the actual progress.
This issue is about improving the behaviour of the progress bar to have it move smoothly across all tests. | 1.0 | probe-mobile: improve progress bar of tests - Currently the progress bar has some issues:
* The progress is local to every test group. This means that every time a test group finishes (ex. websites), the progress bar resets to zero and starts over. This is confusing because it's unclear to a user what is the overall progress/time remaining for the test session.
* In some test groups the progress bar is "jumpy", as in it resets back to zero and then jumps back to the progress that is was before. This happens frequently in the experiments test group.
* When you minimise a test, the progress bar resets to the unknown state and it takes some time for it to figure out the actual progress.
This issue is about improving the behaviour of the progress bar to have it move smoothly across all tests. | priority | probe mobile improve progress bar of tests currently the progress bar has some issues the progress is local to every test group this means that every time a test group finishes ex websites the progress bar resets to zero and starts over this is confusing because it s unclear to a user what is the overall progress time remaining for the test session in some test groups the progress bar is jumpy as in it resets back to zero and then jumps back to the progress that is was before this happens frequently in the experiments test group when you minimise a test the progress bar resets to the unknown state and it takes some time for it to figure out the actual progress this issue is about improving the behaviour of the progress bar to have it move smoothly across all tests | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.