Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
597,773 | 18,171,405,335 | IssuesEvent | 2021-09-27 20:29:04 | ethereum/ethereum-org-website | https://api.github.com/repos/ethereum/ethereum-org-website | closed | Chinese simplified showing no tutorials | Type: Bug Type: Translation Priority: High | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://ethereum.org/zh/developers/tutorials/'
2. See error
**Expected behavior**
Show translated tutorials or show English tutorials
**Screenshots**
Google translate of the main message: "No tutorial contains all these tags. Try to remove one or two features"
<img width="1471" alt="Screenshot 2021-09-27 at 11 27 58" src="https://user-images.githubusercontent.com/62268199/134892385-4574fed4-10b2-4e78-b6ed-d6edaac1e43e.png">
**Want to contribute?**
We love contributions from the Ethereum community! Please comment on an issue if you're interested in helping out with a PR.
| 1.0 | Chinese simplified showing no tutorials - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://ethereum.org/zh/developers/tutorials/'
2. See error
**Expected behavior**
Show translated tutorials or show English tutorials
**Screenshots**
Google translate of the main message: "No tutorial contains all these tags. Try to remove one or two features"
<img width="1471" alt="Screenshot 2021-09-27 at 11 27 58" src="https://user-images.githubusercontent.com/62268199/134892385-4574fed4-10b2-4e78-b6ed-d6edaac1e43e.png">
**Want to contribute?**
We love contributions from the Ethereum community! Please comment on an issue if you're interested in helping out with a PR.
| priority | chinese simplified showing no tutorials describe the bug to reproduce steps to reproduce the behavior go to see error expected behavior show translated tutorials or show english tutorials screenshots google translate of the main message no tutorial contains all these tags try to remove one or two features img width alt screenshot at src want to contribute we love contributions from the ethereum community please comment on an issue if you re interested in helping out with a pr | 1 |
756,641 | 26,479,468,636 | IssuesEvent | 2023-01-17 13:42:43 | ita-social-projects/TeachUA | https://api.github.com/repos/ita-social-projects/TeachUA | closed | [Додати гурток] new club doesn't displayed on "Мiй профiль" after creating. | bug Priority: High | Environment: Windows, Version 94.0.4606.81 (Dev) (64-bit version).
Reproducible: always
Build found: https://speak-ukrainian.org.ua/dev
Preconditions:
Log in as "Керівник":
Email - TestTeach.ua@meta.ua
Password: 12345678
Steps to reproduce
1. Go to https://speak-ukrainian.org.ua/dev
2. Click on 'Додати гурток'.
3. Filled all mandatory fields on "Основна iнформацiя ", "Контакти", "Опис" tabs.
4. Click "Завершити" button.
5. Go to the "Мiй профiль".
6. Check if a new club present on the website.
Actual result
New club doesn't present on the website.
Expected result
New club should be present on the website.
E.g.: "User story https://github.com/ita-social-projects/TeachUA/issues/70
"Bug", Priority (High), Severity (Major), Type ("Functional").
| 1.0 | [Додати гурток] new club doesn't displayed on "Мiй профiль" after creating. - Environment: Windows, Version 94.0.4606.81 (Dev) (64-bit version).
Reproducible: always
Build found: https://speak-ukrainian.org.ua/dev
Preconditions:
Log in as "Керівник":
Email - TestTeach.ua@meta.ua
Password: 12345678
Steps to reproduce
1. Go to https://speak-ukrainian.org.ua/dev
2. Click on 'Додати гурток'.
3. Filled all mandatory fields on "Основна iнформацiя ", "Контакти", "Опис" tabs.
4. Click "Завершити" button.
5. Go to the "Мiй профiль".
6. Check if a new club present on the website.
Actual result
New club doesn't present on the website.
Expected result
New club should be present on the website.
E.g.: "User story https://github.com/ita-social-projects/TeachUA/issues/70
"Bug", Priority (High), Severity (Major), Type ("Functional").
| priority | new club doesn t displayed on мiй профiль after creating environment windows версія розробка розрядна версія reproducible always build found preconditions log in as керівник email testteach ua meta ua password steps to reproduce go to click on додати гурток filled all mandatory fields on основна iнформацiя контакти опис tabs click завершити button go to the мiй профiль check if a new club present on the website actual result new club doesn t present on the website expected result new club should be present on the website e g user story bug priority high severity major type functional | 1 |
604,339 | 18,681,927,352 | IssuesEvent | 2021-11-01 07:16:48 | dataware-tools/dataware-tools | https://api.github.com/repos/dataware-tools/dataware-tools | closed | Use new api client | kind/feature priority/high | ## Purpose
feature request
## Description
Use new api client
Related to https://github.com/dataware-tools/dataware-tools/issues/81
## TODOs
- [x] app-launcher
- [x] app-user-manager
- [x] app-data-browser-next | 1.0 | Use new api client - ## Purpose
feature request
## Description
Use new api client
Related to https://github.com/dataware-tools/dataware-tools/issues/81
## TODOs
- [x] app-launcher
- [x] app-user-manager
- [x] app-data-browser-next | priority | use new api client purpose feature request description use new api client related to todos app launcher app user manager app data browser next | 1 |
805,497 | 29,522,290,010 | IssuesEvent | 2023-06-05 03:49:08 | okTurtles/group-income | https://api.github.com/repos/okTurtles/group-income | opened | DMs are not displayed per-group | Kind:Bug App:Frontend Priority:High Note:UI/UX | ### Problem
DMs are not per-group as seen in this video:
https://github.com/okTurtles/group-income/assets/138706/53477a49-6181-4735-8697-10ae877f1af4
Reproduce:
1. u1 create a group (g1)
2. u1 invite u2, and u2 join (in a separate container tab)
3. u1 invite u4, and u4 join (in a separate container tab)
4. u1 create DM with u2 in g1
5. u3 sign up (in a separate container tab) create a new group (g2)
6. u3 copy invite link and invite u1 to join g2
7. u1 now part of g1 and g2, switch between them, see DM between u1 and u2 visible in both groups
### Solution
Find and fix the getters that are not scoped to per-group properly. | 1.0 | DMs are not displayed per-group - ### Problem
DMs are not per-group as seen in this video:
https://github.com/okTurtles/group-income/assets/138706/53477a49-6181-4735-8697-10ae877f1af4
Reproduce:
1. u1 create a group (g1)
2. u1 invite u2, and u2 join (in a separate container tab)
3. u1 invite u4, and u4 join (in a separate container tab)
4. u1 create DM with u2 in g1
5. u3 sign up (in a separate container tab) create a new group (g2)
6. u3 copy invite link and invite u1 to join g2
7. u1 now part of g1 and g2, switch between them, see DM between u1 and u2 visible in both groups
### Solution
Find and fix the getters that are not scoped to per-group properly. | priority | dms are not displayed per group problem dms are not per group as seen in this video reproduce create a group invite and join in a separate container tab invite and join in a separate container tab create dm with in sign up in a separate container tab create a new group copy invite link and invite to join now part of and switch between them see dm between and visible in both groups solution find and fix the getters that are not scoped to per group properly | 1 |
619,188 | 19,518,468,418 | IssuesEvent | 2021-12-29 14:17:41 | boston-library/curator | https://api.github.com/repos/boston-library/curator | closed | Change Indexing job to accept object_class and identifier instead of using global_id | solr_indexing priority: high | In the `Curator::Indexer::IndexerJob` change the perform method to do the following
```
def perform(obj_class_name, obj_id)
indexable_obj = Object.const_get(obj_class_name).for_reindex_all.find(obj_id)
indexable_obj.update_index
end
```
Also modify the following in the `Curator::Indexable` concern
```
def queue_indexing_job
Curator::Indexer::IndexingJob.set(wait: 2.seconds).perform_later(self.class.name, self.id)
end
def queue_deletion_job
Curator::Indexer::DeletionJob.set(wait: 2.seconds).perform_later(ark_id)
end
```
| 1.0 | Change Indexing job to accept object_class and identifier instead of using global_id - In the `Curator::Indexer::IndexerJob` change the perform method to do the following
```
def perform(obj_class_name, obj_id)
indexable_obj = Object.const_get(obj_class_name).for_reindex_all.find(obj_id)
indexable_obj.update_index
end
```
Also modify the following in the `Curator::Indexable` concern
```
def queue_indexing_job
Curator::Indexer::IndexingJob.set(wait: 2.seconds).perform_later(self.class.name, self.id)
end
def queue_deletion_job
Curator::Indexer::DeletionJob.set(wait: 2.seconds).perform_later(ark_id)
end
```
| priority | change indexing job to accept object class and identifier instead of using global id in the curator indexer indexerjob change the perform method to do the following def perform obj class name obj id indexable obj object const get obj class name for reindex all find obj id indexable obj update index end also modify the following in the curator indexable concern def queue indexing job curator indexer indexingjob set wait seconds perform later self class name self id end def queue deletion job curator indexer deletionjob set wait seconds perform later ark id end | 1 |
627,537 | 19,908,093,474 | IssuesEvent | 2022-01-25 14:43:41 | KazumaOhura/TauApplication | https://api.github.com/repos/KazumaOhura/TauApplication | opened | Fix the Zip-archiving command used when creating artifacts in Actions | 0: help wanted Parent Priority: high Type: improvement | ## Summary
<!-- Summary of this issue -->
The zip command is written for Linux, so it needs to be fixed
<!-- If this is a parent issue, list the child issue numbers -->
## Purpose
<!-- The purpose of creating this issue -->
Because CD cannot run
## Tasks
<!-- Detailed work needed to achieve the purpose -->
- [ ] Change the compression command to its PowerShell equivalent
| 1.0 | Fix the Zip-archiving command used when creating artifacts in Actions - ## Summary
<!-- Summary of this issue -->
The zip command is written for Linux, so it needs to be fixed
<!-- If this is a parent issue, list the child issue numbers -->
## Purpose
<!-- The purpose of creating this issue -->
Because CD cannot run
## Tasks
<!-- Detailed work needed to achieve the purpose -->
- [ ] Change the compression command to its PowerShell equivalent
 | priority | fix the zip archiving command used when creating artifacts in actions summary the zip command is written for linux so it needs to be fixed purpose because cd cannot run tasks change the compression command to its powershell equivalent | 1 |
324,410 | 9,889,157,794 | IssuesEvent | 2019-06-25 13:13:56 | canonical-web-and-design/vanillaframework.io | https://api.github.com/repos/canonical-web-and-design/vanillaframework.io | closed | Remove references to gulp | Priority: High | **Describe the bug**
The [contribute page](https://vanillaframework.io/contribute) contains references to gulp, which is no longer used in the framework
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://vanillaframework.io/contribute
2. Search for "gulp"
**Expected behavior**
An up-to-date set of instructions for testing
| 1.0 | Remove references to gulp - **Describe the bug**
The [contribute page](https://vanillaframework.io/contribute) contains references to gulp, which is no longer used in the framework
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://vanillaframework.io/contribute
2. Search for "gulp"
**Expected behavior**
An up-to-date set of instructions for testing
| priority | remove references to gulp describe the bug the contains references to gulp which is no longer used in the framework to reproduce steps to reproduce the behavior go to search for gulp expected behavior an up to date set of instructions for testing | 1 |
596,861 | 18,149,118,760 | IssuesEvent | 2021-09-26 01:11:50 | uqbar-project/wollok-language | https://api.github.com/repos/uqbar-project/wollok-language | closed | Unify linearization syntax | needs discussion priority: high | @fdodino @npasserini @PalumboN
Hi all,
Right now we have two different syntaxes for mixin linearization in two different places in the language:
In named entities (for example, classes) we use the keyword `mixed with` for the first mixin, followed by `and` as the separator for the rest:
[screenshot]
E.g.:
```wollok
clase MiClase mixed with MiMixin and MiOtroMixin
```
On the other hand, in the anonymous classes created when linearizing a class on `new` we use the keyword `with`:
[screenshot]
E.g.:
```wollok
new MiOtraClase() with MiMixin with MiMixin
```
I imagine this difference is more the product of an oversight than anything intentional, but it makes the grammar needlessly complex and I'd like us to adopt the same notation in every case.
That said, I'd like to propose the push to get rid of the `mixed with` keyword: it's two words, which torments both me and my lexer, but above all it isn't conjugated the same way as `inherits`, so the text ends up suggesting that it's the superclass that is mixed, rather than the class you're defining:
```wollok
class MyClass inherits Other mixed with Something
```
I don't know... We could use the word `mixing`.
It also bothers me a bit that `and` is both a mixin separator and an alias for the && operator, but oh well... I don't want to go off on a tangent.
I know we're somewhat anticipating an overhaul of Classes and Mixins, but since that isn't even defined yet and this is a fairly simple change, it could be a good fit for the next release that bumps the major version.
The other important syntax difference is the one in the `inherits` of singletons:
```wollok
object miObjeto inherits MiClase(p1, p2) with MiMixin
```
If we remove constructors, that line should be:
```wollok
object miObjeto inherits MiClase(arg1 = p1, arg2 = p2) with MiMixin
```
I propose we also allow passing parameters when linearizing the mixin:
```wollok
object miObjeto inherits MiClase(arg1 = p1, arg2 = p2) with MiMixin(argDelMixin = otraCosa)
``` | 1.0 | Unify linearization syntax - @fdodino @npasserini @PalumboN
Hi all,
Right now we have two different syntaxes for mixin linearization in two different places in the language:
In named entities (for example, classes) we use the keyword `mixed with` for the first mixin, followed by `and` as the separator for the rest:
[screenshot]
E.g.:
```wollok
clase MiClase mixed with MiMixin and MiOtroMixin
```
On the other hand, in the anonymous classes created when linearizing a class on `new` we use the keyword `with`:
[screenshot]
E.g.:
```wollok
new MiOtraClase() with MiMixin with MiMixin
```
I imagine this difference is more the product of an oversight than anything intentional, but it makes the grammar needlessly complex and I'd like us to adopt the same notation in every case.
That said, I'd like to propose the push to get rid of the `mixed with` keyword: it's two words, which torments both me and my lexer, but above all it isn't conjugated the same way as `inherits`, so the text ends up suggesting that it's the superclass that is mixed, rather than the class you're defining:
```wollok
class MyClass inherits Other mixed with Something
```
I don't know... We could use the word `mixing`.
It also bothers me a bit that `and` is both a mixin separator and an alias for the && operator, but oh well... I don't want to go off on a tangent.
I know we're somewhat anticipating an overhaul of Classes and Mixins, but since that isn't even defined yet and this is a fairly simple change, it could be a good fit for the next release that bumps the major version.
The other important syntax difference is the one in the `inherits` of singletons:
```wollok
object miObjeto inherits MiClase(p1, p2) with MiMixin
```
If we remove constructors, that line should be:
```wollok
object miObjeto inherits MiClase(arg1 = p1, arg2 = p2) with MiMixin
```
I propose we also allow passing parameters when linearizing the mixin:
```wollok
object miObjeto inherits MiClase(arg1 = p1, arg2 = p2) with MiMixin(argDelMixin = otraCosa)
``` | priority | unify linearization syntax fdodino npasserini palumbon hi all right now we have two different syntaxes for mixin linearization in two different places in the language in named entities for example classes we use the keyword mixed with for the first mixin followed by and as the separator for the rest e g wollok clase miclase mixed with mimixin and miotromixin on the other hand in the anonymous classes created when linearizing a class on new we use the keyword with e g wollok new miotraclase with mimixin with mimixin i imagine this difference is more the product of an oversight than anything intentional but it makes the grammar needlessly complex and i d like us to adopt the same notation in every case that said i d like to propose the push to get rid of the mixed with keyword it s two words which torments both me and my lexer but above all it isn t conjugated the same way as inherits so the text ends up suggesting that it s the superclass that is mixed rather than the class you re defining wollok class myclass inherits other mixed with something i don t know we could use the word mixing it also bothers me a bit that and is both a mixin separator and an alias for the operator but oh well i don t want to go off on a tangent i know we re somewhat anticipating an overhaul of classes and mixins but since that isn t even defined yet and this is a fairly simple change it could be a good fit for the next release that bumps the major version the other important syntax difference is the one in the inherits of singletons wollok object miobjeto inherits miclase with mimixin if we remove constructors that line should be wollok object miobjeto inherits miclase with mimixin i propose we also allow passing parameters when linearizing the mixin wollok object miobjeto inherits miclase with
mimixin argdelmixin otracosa | 1 |
232,502 | 7,660,812,626 | IssuesEvent | 2018-05-11 12:05:09 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | App crashed when accessing camera view but access to camera denied for app [iOS] | bounty bounty-m bug fix them all high-priority ios |
***Type***: Bug
***Summary***: If user denied access to camera for app on iOS and tries to access the camera view
app crashes with the error
`Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureMetadataOutput setMetadataObjectTypes:] Unsupported type found - use -availableMetadataObjectTypes'
`
Two ways to deny the access to camera:
1) When app asks permissions to Camera and user submits 'Deny Access' from requesting pop up
2) If user denied access from app setting
**Note**: Android is OK. iOS affected only.
#### Expected behavior
Pop up with request to access camera view if access was denied previously
#### Actual behavior
App crash on access camera view if access was denied previously
### Reproduction 1
- Instal fresh build
- Open Status and create account
- Navigate to `Profile` -> `Edit` -> `Edit profile image`
- Tap 'Capture'
- On request to access camera pop up tap - 'Don't allow'
### Reproduction 2
- Instal fresh build
- Open Status and create account
- Navigate to `+` -> `Start new chat`
- Tap on scan QR code button from 'Enter contact code' field
- On request to access camera pop up tap - 'Don't allow'
### Reproduction 3
**Precondition:** App installed, account created and access to camera allowed before
- From device settings for app - disable access to 'Camera'
- Open Status
- Log in to account (or create a new one)
- Navigate to `Wallet` -> `Send transaction` -> `Specify recipient`
- Tap 'Scan QR code'
### Additional Information
* Status version: Develop 22nd of February (0.9.11d711)
* Operating System: iOS
TF session with logs: https://app.testfairy.com/projects/4803590-status/builds/7756060/sessions/63/?accessToken=RkbcobWCuuHuS-TBCL6PBOvuedY | 1.0 | App crashed when accessing camera view but access to camera denied for app [iOS] -
***Type***: Bug
***Summary***: If user denied access to camera for app on iOS and tries to access the camera view
app crashes with the error
`Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureMetadataOutput setMetadataObjectTypes:] Unsupported type found - use -availableMetadataObjectTypes'
`
Two ways to deny the access to camera:
1) When app asks permissions to Camera and user submits 'Deny Access' from requesting pop up
2) If user denied access from app setting
**Note**: Android is OK. iOS affected only.
#### Expected behavior
Pop up with request to access camera view if access was denied previously
#### Actual behavior
App crash on access camera view if access was denied previously
### Reproduction 1
- Instal fresh build
- Open Status and create account
- Navigate to `Profile` -> `Edit` -> `Edit profile image`
- Tap 'Capture'
- On request to access camera pop up tap - 'Don't allow'
### Reproduction 2
- Instal fresh build
- Open Status and create account
- Navigate to `+` -> `Start new chat`
- Tap on scan QR code button from 'Enter contact code' field
- On request to access camera pop up tap - 'Don't allow'
### Reproduction 3
**Precondition:** App installed, account created and access to camera allowed before
- From device settings for app - disable access to 'Camera'
- Open Status
- Log in to account (or create a new one)
- Navigate to `Wallet` -> `Send transaction` -> `Specify recipient`
- Tap 'Scan QR code'
### Additional Information
* Status version: Develop 22nd of February (0.9.11d711)
* Operating System: iOS
TF session with logs: https://app.testfairy.com/projects/4803590-status/builds/7756060/sessions/63/?accessToken=RkbcobWCuuHuS-TBCL6PBOvuedY | priority | app crashed when accessing camera view but access to camera denied for app type bug summary if user denied access to camera for app on ios and tries to access the camera view app crashes with the error application specific information terminating app due to uncaught exception nsinvalidargumentexception reason unsupported type found use availablemetadataobjecttypes two ways to deny the access to camera when app asks permissions to camera and user submits deny access from requesting pop up if user denied access from app setting note android is ok ios affected only expected behavior pop up with request to access camera view if access was denied previously actual behavior app crash on access camera view if access was denied previously reproduction instal fresh build open status and create account navigate to profile edit edit profile image tap capture on request to access camera pop up tap don t allow reproduction instal fresh build open status and create account navigate to start new chat tap on scan qr code button from enter contact code field on request to access camera pop up tap don t allow reproduction precondition app installed account created and access to camera allowed before from device settings for app disable access to camera open status log in to account or create a new one navigate to wallet send transaction specify recipient tap scan qr code additional information status version develop of february operating system ios tf session with logs | 1 |
179,092 | 6,621,492,242 | IssuesEvent | 2017-09-21 19:23:53 | yashdsaraf/reimagined-eureka | https://api.github.com/repos/yashdsaraf/reimagined-eureka | closed | File explorer does not handle content overflow properly | bug priority: high | We need a better solution than mentioning fixed size for explorer div for different screens. | 1.0 | File explorer does not handle content overflow properly - We need a better solution than mentioning fixed size for explorer div for different screens. | priority | file explorer does not handle content overflow properly we need a better solution than mentioning fixed size for explorer div for different screens | 1 |
75,550 | 3,466,442,085 | IssuesEvent | 2015-12-22 03:33:57 | mulesoft/api-workbench | https://api.github.com/repos/mulesoft/api-workbench | closed | Uncaught TypeError: Cannot read property 'absolutePath' of null | atom bug parser priority:high | 1. Following tutorial at [http://apiworkbench.com/docs]{http://apiworkbench.com/docs}
2. At the section "Filling method bodies and responses", for step "To add a response body, make sure the focus is on 200: and choose Create new Response Body out of the available actions.", when I click on "Create new Response Body" I get this error message.
**Atom Version**: 1.3.2
**System**: Ubuntu 15.10
**Thrown From**: [api-workbench](https://github.com/mulesoft/api-workbench) package, v0.8.9
### Stack Trace
Uncaught TypeError: Cannot read property 'absolutePath' of null
```
At /home/orestis/.atom/packages/api-workbench/main.js:3
TypeError: Cannot read property 'absolutePath' of null
at t.typeId (/home/orestis/.atom/packages/api-workbench/main.js:3:9393)
at t.allProperties (/home/orestis/.atom/packages/api-workbench/main.js:10:26388)
at t.allProperties (/home/orestis/.atom/packages/api-workbench/main.js:3:8636)
at e.process (/home/orestis/.atom/packages/api-workbench/main.js:20:22310)
at e.process (/home/orestis/.atom/packages/api-workbench/main.js:20:21791)
at t.children (/home/orestis/.atom/packages/api-workbench/main.js:3:29308)
at t.attrs (/home/orestis/.atom/packages/api-workbench/main.js:3:30609)
at t.attr (/home/orestis/.atom/packages/api-workbench/main.js:3:30248)
at /home/orestis/.atom/packages/api-workbench/main.js:7:18092
at Array.forEach (native)
at e.createStubNode (/home/orestis/.atom/packages/api-workbench/main.js:7:18013)
at Object.a [as createStub] (/home/orestis/.atom/packages/api-workbench/main.js:29:23590)
at Object.v [as newNode] (/home/orestis/.atom/packages/api-workbench/main.js:29:15982)
at Object.onClick (/home/orestis/.atom/packages/api-workbench/main.js:42:21250)
at e.onClick (/home/orestis/.atom/packages/api-workbench/main.js:31:27335)
at /home/orestis/.atom/packages/api-workbench/main.js:31:20606
at /home/orestis/.atom/packages/api-workbench/main.js:27:18055
at Array.forEach (native)
at HTMLButtonElement._onClickListeners.length._onAltClickListeners.length.e.onclick (/home/orestis/.atom/packages/api-workbench/main.js:27:18028)
```
### Commands
```
-2:46.8.0 api-workbench:new-project (atom-pane.pane.active)
-2:29.1.0 core:select-all (atom-text-editor.editor.mini.is-focused)
-2:28.8.0 core:paste (atom-text-editor.editor.mini.is-focused)
2x -2:22.8.0 core:backspace (atom-text-editor.editor.mini.is-focused)
-2:14 core:move-right (atom-text-editor.editor.mini.is-focused)
3x -2:13.8.0 core:delete (atom-text-editor.editor.mini.is-focused)
30x -1:03.0 core:backspace (atom-text-editor.editor.is-focused)
```
### Config
```json
{}
```
### Installed Packages
```coffee
# User
api-workbench, v0.8.9
linter, v1.11.3
# Dev
No dev packages
```
| 1.0 | Uncaught TypeError: Cannot read property 'absolutePath' of null - 1. Following tutorial at [http://apiworkbench.com/docs]{http://apiworkbench.com/docs}
2. At the section "Filling method bodies and responses", for step "To add a response body, make sure the focus is on 200: and choose Create new Response Body out of the available actions.", when I click on "Create new Response Body" I get this error message.
**Atom Version**: 1.3.2
**System**: Ubuntu 15.10
**Thrown From**: [api-workbench](https://github.com/mulesoft/api-workbench) package, v0.8.9
### Stack Trace
Uncaught TypeError: Cannot read property 'absolutePath' of null
```
At /home/orestis/.atom/packages/api-workbench/main.js:3
TypeError: Cannot read property 'absolutePath' of null
at t.typeId (/home/orestis/.atom/packages/api-workbench/main.js:3:9393)
at t.allProperties (/home/orestis/.atom/packages/api-workbench/main.js:10:26388)
at t.allProperties (/home/orestis/.atom/packages/api-workbench/main.js:3:8636)
at e.process (/home/orestis/.atom/packages/api-workbench/main.js:20:22310)
at e.process (/home/orestis/.atom/packages/api-workbench/main.js:20:21791)
at t.children (/home/orestis/.atom/packages/api-workbench/main.js:3:29308)
at t.attrs (/home/orestis/.atom/packages/api-workbench/main.js:3:30609)
at t.attr (/home/orestis/.atom/packages/api-workbench/main.js:3:30248)
at /home/orestis/.atom/packages/api-workbench/main.js:7:18092
at Array.forEach (native)
at e.createStubNode (/home/orestis/.atom/packages/api-workbench/main.js:7:18013)
at Object.a [as createStub] (/home/orestis/.atom/packages/api-workbench/main.js:29:23590)
at Object.v [as newNode] (/home/orestis/.atom/packages/api-workbench/main.js:29:15982)
at Object.onClick (/home/orestis/.atom/packages/api-workbench/main.js:42:21250)
at e.onClick (/home/orestis/.atom/packages/api-workbench/main.js:31:27335)
at /home/orestis/.atom/packages/api-workbench/main.js:31:20606
at /home/orestis/.atom/packages/api-workbench/main.js:27:18055
at Array.forEach (native)
at HTMLButtonElement._onClickListeners.length._onAltClickListeners.length.e.onclick (/home/orestis/.atom/packages/api-workbench/main.js:27:18028)
```
### Commands
```
-2:46.8.0 api-workbench:new-project (atom-pane.pane.active)
-2:29.1.0 core:select-all (atom-text-editor.editor.mini.is-focused)
-2:28.8.0 core:paste (atom-text-editor.editor.mini.is-focused)
2x -2:22.8.0 core:backspace (atom-text-editor.editor.mini.is-focused)
-2:14 core:move-right (atom-text-editor.editor.mini.is-focused)
3x -2:13.8.0 core:delete (atom-text-editor.editor.mini.is-focused)
30x -1:03.0 core:backspace (atom-text-editor.editor.is-focused)
```
### Config
```json
{}
```
### Installed Packages
```coffee
# User
api-workbench, v0.8.9
linter, v1.11.3
# Dev
No dev packages
```
| priority | uncaught typeerror cannot read property absolutepath of null following tutorial at at the section filling method bodies and responses for step to add a response body make sure the focus is on and choose create new response body out of the available actions when i click on create new response body i get this error message atom version system ubuntu thrown from package stack trace uncaught typeerror cannot read property absolutepath of null at home orestis atom packages api workbench main js typeerror cannot read property absolutepath of null at t typeid home orestis atom packages api workbench main js at t allproperties home orestis atom packages api workbench main js at t allproperties home orestis atom packages api workbench main js at e process home orestis atom packages api workbench main js at e process home orestis atom packages api workbench main js at t children home orestis atom packages api workbench main js at t attrs home orestis atom packages api workbench main js at t attr home orestis atom packages api workbench main js at home orestis atom packages api workbench main js at array foreach native at e createstubnode home orestis atom packages api workbench main js at object a home orestis atom packages api workbench main js at object v home orestis atom packages api workbench main js at object onclick home orestis atom packages api workbench main js at e onclick home orestis atom packages api workbench main js at home orestis atom packages api workbench main js at home orestis atom packages api workbench main js at array foreach native at htmlbuttonelement onclicklisteners length onaltclicklisteners length e onclick home orestis atom packages api workbench main js commands api workbench new project atom pane pane active core select all atom text editor editor mini is focused core paste atom text editor editor mini is focused core backspace atom text editor editor mini is focused core move right atom text editor editor mini is focused core 
delete atom text editor editor mini is focused core backspace atom text editor editor is focused config json installed packages coffee user api workbench linter dev no dev packages | 1 |
759,939 | 26,619,438,392 | IssuesEvent | 2023-01-24 10:05:53 | slsdetectorgroup/slsDetectorPackage | https://api.github.com/repos/slsdetectorgroup/slsDetectorPackage | closed | Eiger python: Versions fail (hardwareversion) | action - Bug priority - High status - resolved | versions gets hardwareversion, which is not implemented for Eiger and fails | 1.0 | Eiger python: Versions fail (hardwareversion) - versions gets hardwareversion, which is not implemented for Eiger and fails | priority | eiger python versions fail hardwareversion versions gets hardwareversion which is not implemented for eiger and fails | 1 |
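A defensive pattern for the bug above is to whitelist which getters each detector type supports, so an aggregate `versions` call skips `hardwareversion` on Eiger instead of failing. A minimal Python sketch (the real slsDetectorPackage API differs; all names here are hypothetical):

```python
# Hypothetical sketch: skip getters that a detector type does not implement,
# so an aggregate call like versions() degrades gracefully instead of failing.
UNSUPPORTED = {"Eiger": {"hardwareversion"}}

def versions(detector_type, getters):
    """Collect version info, omitting getters this detector type lacks."""
    result = {}
    for name, getter in getters.items():
        if name in UNSUPPORTED.get(detector_type, set()):
            continue  # e.g. Eiger does not implement hardwareversion
        result[name] = getter()
    return result
```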
828,981 | 31,849,781,222 | IssuesEvent | 2023-09-14 23:57:07 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | opened | Plots from script are hidden | High Priority Bug ISIS Team: Core Found in Beta | **Describe the bug**
When running a script generated by the plot script generator, the plot is hidden. It used to be shown by default.
**To Reproduce**
1. Load a workspace and plot a spectrum
2. copy plot script to clipboard using the toolbar button
3. close plot
4. paste script into script editor
5. run script
6. plot is hidden, you have to go to the plots widget to show it.
**Expected behavior**
plot should appear
**Platform/Version (please complete the following information):**
- OS: all
- Mantid Version: 6.8 manual testing candidate (regression since 6.7)
| 1.0 | Plots from script are hidden - **Describe the bug**
When running a script generated by the plot script generator, the plot is hidden. It used to be shown by default.
**To Reproduce**
1. Load a workspace and plot a spectrum
2. copy plot script to clipboard using the toolbar button
3. close plot
4. paste script into script editor
5. run script
6. plot is hidden, you have to go to the plots widget to show it.
**Expected behavior**
plot should appear
**Platform/Version (please complete the following information):**
- OS: all
- Mantid Version: 6.8 manual testing candidate (regression since 6.7)
| priority | plots from script are hidden describe the bug when running a script generated by the plot script generator the plot is hidden it used to be shown by default to reproduce load a workspace and plot a spectrum copy plot script to clipboard using the toolbar button close plot paste script into script editor run script plot is hidden you have to go to the plots widget to show it expected behavior plot should appear platform version please complete the following information os all mantid version manual testing candidate regression since | 1 |
578,947 | 17,157,249,498 | IssuesEvent | 2021-07-14 08:33:47 | stackabletech/issues | https://api.github.com/repos/stackabletech/issues | closed | Provide NiFi packages with fix for bootstrap.conf location | priority/high status/blocked type/enhancement | Currently NiFi has no way to override the path where the config should be read from.
This requires a hack in the Agent.
We'd like to provide a package that has the fix included to allow configuring this location.
https://issues.apache.org/jira/browse/NIFI-5573
https://github.com/apache/nifi/pull/2985
This is blocked by our packaging discussion because we need to decide how we want to provide custom packages. | 1.0 | Provide NiFi packages with fix for bootstrap.conf location - Currently NiFi has no way to override the path where the config should be read from.
This requires a hack in the Agent.
We'd like to provide a package that has the fix included to allow configuring this location.
https://issues.apache.org/jira/browse/NIFI-5573
https://github.com/apache/nifi/pull/2985
This is blocked by our packaging discussion because we need to decide how we want to provide custom packages. | priority | provide nifi packages with fix for bootstrap conf location currently nifi has no way to override the path where the config should be read from this requires a hack in the agent we d like to provide a package that has the fix included to allow configuring this location this is blocked by our packaging discussion because we need to decide how we want to provide custom packages | 1 |
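The override described above follows a common pattern: resolve the config path from an external setting, falling back to the packaged default. An illustrative sketch (not NiFi's actual mechanism — the linked PR may use a differently named system property or flag):

```python
import os

# Illustrative only: resolve a config file path from an environment override,
# falling back to the packaged default. The variable name is hypothetical.
def bootstrap_conf_path(default="./conf/bootstrap.conf"):
    return os.environ.get("NIFI_BOOTSTRAP_CONF", default)
```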
265,797 | 8,359,175,487 | IssuesEvent | 2018-10-03 07:15:45 | dirkwhoffmann/virtualc64 | https://api.github.com/repos/dirkwhoffmann/virtualc64 | opened | GUI performance decrease on Mojave | Priority-High bug | To reproduce (for sure in 3.0, maybe in 2.5 as well):
Open hardware preferences, change something, and hit cancel. It will take some time to close the window. Running the same executable in High Sierra does not show any delays.
Culprit: There might be an issue with recursive posix mutexes in Mojave. (see variable "mutex" which is accessed in C64::suspend() and C64::resume()).
All reviews about Mojave which I read so far are telling me that Mojave is the best OS Apple has ever released. Seriously? I must be running a different OS then 😖. Thinking about downgrading to High Sierra ...
| 1.0 | GUI performance decrease on Mojave - To reproduce (for sure in 3.0, maybe in 2.5 as well):
Open hardware preferences, change something, and hit cancel. It will take some time to close the window. Running the same executable in High Sierra does not show any delays.
Culprit: There might be an issue with recursive posix mutexes in Mojave. (see variable "mutex" which is accessed in C64::suspend() and C64::resume()).
All reviews about Mojave which I read so far are telling me that Mojave is the best OS Apple has ever released. Seriously? I must be running a different OS then 😖. Thinking about downgrading to High Sierra ...
| priority | gui performance decrease on mojave to reproduce for sure in maybe in as well open hardware preferences change something and hit cancel it will take some time to close the window running the same executable in high sierra does not show any delays culprit there might be an issue with recursive posix mutexes in mohave see variable mutex which is accessed in suspend and resume all reviews about mojave which i read so far are telling me that mojave is the best os apple has ever released seriously i must be running a different os then 😖 thinking about downgrading to high sierra | 1 |
785,537 | 27,617,235,248 | IssuesEvent | 2023-03-09 20:24:51 | ORNL-AMO/VERIFI | https://api.github.com/repos/ORNL-AMO/VERIFI | closed | Report Printing after bootstrap updates | bug High Priority Reports | need to fix report printing caused by bootstrap updates | 1.0 | Report Printing after bootstrap updates - need to fix report printing caused by bootstrap updates | priority | report printing after bootstrap updates need to fix report printing caused by bootstrap updates | 1 |
95,381 | 3,946,802,658 | IssuesEvent | 2016-04-28 07:03:40 | DigitalCampus/moodle-block_oppia_mobile_export | https://api.github.com/repos/DigitalCampus/moodle-block_oppia_mobile_export | opened | Replace some text in quiz questions and responses | bug high priority | to replace: \r, \n, & - these cause issues once exported
Might be able to remove this once the issue#136 has been merged in and the app handles the question/response text as (limited) html rather than as plain text | 1.0 | Replace some text in quiz questions and responses - to replace: \r, \n, & - these cause issues once exported
Might be able to remove this once the issue#136 has been merged in and the app handles the question/response text as (limited) html rather than as plain text | priority | replace some text in quiz questions and responses to replace r n nbsp amp these cause issues once exported might be able to remove this once the issue has been merged in and the app handles the question response text as limited html rather than as plain text | 1 |
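A sanitizer for the characters listed above could look like the sketch below. The exact replacements are assumptions — in particular whether `&` should be HTML-escaped to `&amp;` or removed outright:

```python
# Sketch: strip the characters that break the export.
# Replacing & with &amp; (HTML-escaping) is an assumption; the exporter
# may instead need to drop it entirely.
def sanitize_question_text(text):
    text = text.replace("\r", " ").replace("\n", " ")
    text = text.replace("&", "&amp;")
    return " ".join(text.split())  # collapse runs of whitespace
```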
714,854 | 24,578,257,334 | IssuesEvent | 2022-10-13 13:53:10 | AY2223S1-CS2113-T17-1/tp | https://api.github.com/repos/AY2223S1-CS2113-T17-1/tp | closed | [List] As an AOM, I can view the flight schedule and their timings for each day. | type.Story priority.High | So that I can be able to have an overview of the flight schedule for the day.
Jordon: Class creation (With accompanying methods)
Rachel: implement to main class
Due Date: Tuesday (11th of Oct) | 1.0 | [List] As an AOM, I can view the flight schedule and their timings for each day. - So that I can be able to have an overview of the flight schedule for the day.
Jordon: Class creation (With accompanying methods)
Rachel: implement to main class
Due Date: Tuesday (11th of Oct) | priority | as an aom i can view the flight schedule and their timings for each day so that i can be able to have an overview of the flight schedule for the day jordon class creation with accompanying methods rachel implement to main class due date tuesday of oct | 1 |
195,703 | 6,917,291,859 | IssuesEvent | 2017-11-29 07:51:48 | xcodeswift/sake | https://api.github.com/repos/xcodeswift/sake | closed | Update formula to copy the SakefileDescription dynamic library | difficulty:moderate priority:high status:ready-development type:bug | ## Context 🕵️♀️
`SakefileDescription.dylib` is necessary to run the `Sakefile.swift`. It should be copied as part of the installation.
## What 🌱
Review the formula `install` method and make sure that the dynamic library gets copied, and `sake` can link against it when running the `Sakefile.swift`.
## Proposal 🎉
Fix the Homebrew formula.
| 1.0 | Update formula to copy the SakefileDescription dynamic library - ## Context 🕵️♀️
`SakefileDescription.dylib` is necessary to run the `Sakefile.swift`. It should be copied as part of the installation.
## What 🌱
Review the formula `install` method and make sure that the dynamic library gets copied, and `sake` can link against it when running the `Sakefile.swift`.
## Proposal 🎉
Fix the Homebrew formula.
| priority | update formula to copy the sakefiledescription dynamic library context 🕵️♀️ sakefiledescription dylib is necessary to run the sakefile swift it should be copied as part of the installation what 🌱 review the formula install method and make sure that the dynamic library gets copied and sake can link against it when running the sakefile swift proposal 🎉 fix the homebrew formula | 1 |
104,304 | 4,209,106,103 | IssuesEvent | 2016-06-29 02:52:25 | PowerPointLabs/PowerPointLabs | https://api.github.com/repos/PowerPointLabs/PowerPointLabs | closed | Highlight Text doesn't work for mac ppt 2011 | Feature.TextHighlight Priority.Medium type-bug | _From [Gigi...@gmail.com](https://code.google.com/u/108486287164240412575/) on April 02, 2014 23:11:47_
What steps will reproduce the problem?
1. apply Highlight Text on a slide

What is the expected output? What do you see instead?
The sequence of animation is different in mac ppt
_Original issue: http://code.google.com/p/powerpointlabs/issues/detail?id=285_ | 1.0 | Highlight Text doesn't work for mac ppt 2011 - _From [Gigi...@gmail.com](https://code.google.com/u/108486287164240412575/) on April 02, 2014 23:11:47_
What steps will reproduce the problem?
1. apply Highlight Text on a slide

What is the expected output? What do you see instead?
The sequence of animation is different in mac ppt
_Original issue: http://code.google.com/p/powerpointlabs/issues/detail?id=285_ | priority | highlight text doesn t work for mac ppt from on april what steps will reproduce the problem apply highlight text on a slide what is the expected output what do you see instead the sequence of animation s different in mac ppt original issue | 1 |
187,355 | 6,756,430,517 | IssuesEvent | 2017-10-24 07:01:42 | RMUASD-Team1-2017/RMUASD | https://api.github.com/repos/RMUASD-Team1-2017/RMUASD | closed | Failure Scenarios - SORA | Priority High | Do a more specific documentation of the failure scenarios in the SORA application. Collaborate with other group and make draft to Kjeld. | 1.0 | Failure Scenarios - SORA - Do a more specific documentation of the failure scenarios in the SORA application. Collaborate with other group and make draft to Kjeld. | priority | failure scenarios sora do a more specific documentation of the failure scenarios in the sora application collaborate with other group and make draft to kjeld | 1 |
326,547 | 9,957,668,970 | IssuesEvent | 2019-07-05 17:45:37 | siddhi-io/siddhi-io-http | https://api.github.com/repos/siddhi-io/siddhi-io-http | reopened | Netty resources leak when performing perf tests | priority/high resolution/invalid severity/major type/bug | **Description:**
When performing a perf test using the http request-response flow, the following LEAK error message is printed.
```
[2019-04-19 15:46:12,400] ERROR {io.netty.util.ResourceLeakDetector} - LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable adv
anced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.
```
After setting the leak detection level to "**ADVANCED**", it produces the following logs:
```
[2019-05-31 06:37:24,564] ERROR {io.netty.util.ResourceLeakDetector} - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
#1:
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:273)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.getByte(AdvancedLeakAwareByteBuf.java:154)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:360)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:803)
io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:603)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:227)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#4:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:803)
io.netty.handler.codec.http.HttpObjectDecoder$LineParser.parse(HttpObjectDecoder.java:852)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:208)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.getUnsignedByte(AdvancedLeakAwareByteBuf.java:160)
io.netty.handler.codec.http.HttpObjectDecoder.skipControlCharacters(HttpObjectDecoder.java:568)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:202)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#6:
Hint: 'codec' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#7:
Hint: 'DefaultChannelPipeline$HeadContext#0' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
Created at:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:634)
io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:345)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
: 6 leak records were discarded because they were duplicates
```
| 1.0 | Netty resources leak when performing perf tests - **Description:**
When performing a perf test using the http request-response flow, the following LEAK error message is printed.
```
[2019-04-19 15:46:12,400] ERROR {io.netty.util.ResourceLeakDetector} - LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable adv
anced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.
```
After setting the leak detection level to "**ADVANCED**", it produces the following logs:
```
[2019-05-31 06:37:24,564] ERROR {io.netty.util.ResourceLeakDetector} - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
#1:
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:273)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.getByte(AdvancedLeakAwareByteBuf.java:154)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:360)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:803)
io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:603)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:227)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#4:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:803)
io.netty.handler.codec.http.HttpObjectDecoder$LineParser.parse(HttpObjectDecoder.java:852)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:208)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.getUnsignedByte(AdvancedLeakAwareByteBuf.java:160)
io.netty.handler.codec.http.HttpObjectDecoder.skipControlCharacters(HttpObjectDecoder.java:568)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:202)
io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#6:
Hint: 'codec' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#7:
Hint: 'DefaultChannelPipeline$HeadContext#0' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
Created at:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:634)
io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:345)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
: 6 leak records were discarded because they were duplicates
```
| priority | netty resources leak when performing perf tests description when performing a perf test using http request response flow following leak error message is getting printed error io netty util resourceleakdetector leak bytebuf release was not called before it s garbage collected enable advanced leak reporting to find out where the leak occurred to enable adv anced leak reporting specify the jvm option dio netty leakdetection level advanced or call resourceleakdetector setlevel see for more information after setting the leak detection level to advanced it provides following logs error io netty util resourceleakdetector leak bytebuf release was not called before it s garbage collected see for more information recent access records io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent 
singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread run thread java io netty buffer advancedleakawarebytebuf getbyte advancedleakawarebytebuf java io netty handler codec http httpobjectdecoder decode httpobjectdecoder java io netty handler codec http httpclientcodec decoder decode httpclientcodec java io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread 
run thread java io netty buffer advancedleakawarebytebuf foreachbyte advancedleakawarebytebuf java io netty handler codec http httpobjectdecoder headerparser parse httpobjectdecoder java io netty handler codec http httpobjectdecoder readheaders httpobjectdecoder java io netty handler codec http httpobjectdecoder decode httpobjectdecoder java io netty handler codec http httpclientcodec decoder decode httpclientcodec java io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java 
lang thread run thread java io netty buffer advancedleakawarebytebuf foreachbyte advancedleakawarebytebuf java io netty handler codec http httpobjectdecoder headerparser parse httpobjectdecoder java io netty handler codec http httpobjectdecoder lineparser parse httpobjectdecoder java io netty handler codec http httpobjectdecoder decode httpobjectdecoder java io netty handler codec http httpclientcodec decoder decode httpclientcodec java io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run 
fastthreadlocalrunnable java java lang thread run thread java io netty buffer advancedleakawarebytebuf getunsignedbyte advancedleakawarebytebuf java io netty handler codec http httpobjectdecoder skipcontrolcharacters httpobjectdecoder java io netty handler codec http httpobjectdecoder decode httpobjectdecoder java io netty handler codec http httpclientcodec decoder decode httpclientcodec java io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread run thread java 
hint codec will handle the message from this point io netty channel defaultchannelpipeline touch defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread run thread java hint defaultchannelpipeline headcontext will handle the message from this point io netty channel defaultchannelpipeline touch defaultchannelpipeline java io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util 
concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread run thread java created at io netty buffer advancedleakawarebytebuf writebytes advancedleakawarebytebuf java io netty channel socket nio niosocketchannel doreadbytes niosocketchannel java io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java io netty channel nio nioeventloop processselectedkey nioeventloop java io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java io netty channel nio nioeventloop processselectedkeys nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java java lang thread run thread java leak records were discarded because they were duplicates | 1 |
376,489 | 11,147,644,280 | IssuesEvent | 2019-12-23 13:17:36 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | QGIS Server 3.4 broken on Windows | Bug Feedback High Priority Regression Server | Author Name: **Uroš Preložnik** (@uprel)
Original Redmine Issue: [20873](https://issues.qgis.org/issues/20873)
Affected QGIS version: 3.4.4
Redmine category: qgis_server
---
The latest QGIS Server from OSGeo4W doesn't work. It may be related to the Python configuration.
These are the Apache environment variables used (I checked, they are valid paths):
@SetEnv GDAL_DATA "C:\OSGeo4W64\share\gdal"
SetEnv QGIS_AUTH_DB_DIR_PATH "C:\OSGeo4W64\apps\qgis\resources"
SetEnv PYTHONHOME "C:\OSGeo4W64\apps\Python37"
SetEnv PATH "C:\OSGeo4W64\bin;C:\OSGeo4W64\apps\qgis\bin;C:\OSGeo4W64\apps\Qt5\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;"
SetEnv QGIS_PREFIX_PATH "C:\OSGeo4W64\apps\qgis"
SetEnv QT_PLUGIN_PATH "C:\OSGeo4W64\apps\qgis\qtplugins;C:\OSGeo4W64\apps\Qt5\plugins;"@
Call to GetCapabilities
http://localhost:8080/cgi-bin/qgis_mapserv.fcgi.exe?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities
The request takes over 7 seconds and generates a broken response: some text on screen, but not complete XML.
Attached is the QGIS Server log file for that request only.
---
- [qgis-server.log](https://issues.qgis.org/attachments/download/14018/qgis-server.log) (Uroš Preložnik)
- [qgis server 3.4.4 uncomplete response.xml](https://issues.qgis.org/attachments/download/14343/qgis%20server%203.4.4%20uncomplete%20response.xml) (Uroš Preložnik)
- [qgis-server-3.4.4.log](https://issues.qgis.org/attachments/download/14344/qgis-server-3.4.4.log) (Uroš Preložnik)
- [httpd-vhosts.conf](https://issues.qgis.org/attachments/download/14548/httpd-vhosts.conf) (João Gaspar)
- [httpd.conf](https://issues.qgis.org/attachments/download/14549/httpd.conf) (João Gaspar) | 1.0 | QGIS Server 3.4 broken on Windows - Author Name: **Uroš Preložnik** (@uprel)
Original Redmine Issue: [20873](https://issues.qgis.org/issues/20873)
Affected QGIS version: 3.4.4
Redmine category: qgis_server
---
The latest QGIS Server from OSGeo4W doesn't work. It may be related to the Python configuration.
These are the Apache environment variables used (I checked, they are valid paths):
@SetEnv GDAL_DATA "C:\OSGeo4W64\share\gdal"
SetEnv QGIS_AUTH_DB_DIR_PATH "C:\OSGeo4W64\apps\qgis\resources"
SetEnv PYTHONHOME "C:\OSGeo4W64\apps\Python37"
SetEnv PATH "C:\OSGeo4W64\bin;C:\OSGeo4W64\apps\qgis\bin;C:\OSGeo4W64\apps\Qt5\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;"
SetEnv QGIS_PREFIX_PATH "C:\OSGeo4W64\apps\qgis"
SetEnv QT_PLUGIN_PATH "C:\OSGeo4W64\apps\qgis\qtplugins;C:\OSGeo4W64\apps\Qt5\plugins;"@
Call to GetCapabilities
http://localhost:8080/cgi-bin/qgis_mapserv.fcgi.exe?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities
The request takes over 7 seconds and generates a broken response: some text on screen, but not complete XML.
Attached is the QGIS Server log file for that request only.
---
- [qgis-server.log](https://issues.qgis.org/attachments/download/14018/qgis-server.log) (Uroš Preložnik)
- [qgis server 3.4.4 uncomplete response.xml](https://issues.qgis.org/attachments/download/14343/qgis%20server%203.4.4%20uncomplete%20response.xml) (Uroš Preložnik)
- [qgis-server-3.4.4.log](https://issues.qgis.org/attachments/download/14344/qgis-server-3.4.4.log) (Uroš Preložnik)
- [httpd-vhosts.conf](https://issues.qgis.org/attachments/download/14548/httpd-vhosts.conf) (João Gaspar)
- [httpd.conf](https://issues.qgis.org/attachments/download/14549/httpd.conf) (João Gaspar) | priority | qgis server broken on windows author name uroš preložnik uprel original redmine issue affected qgis version redmine category qgis server latest qgis server from doesn t work maybe is related to python configuration this are apache environment variables used i checked they are valid paths setenv gdal data c share gdal setenv qgis auth db dir path c apps qgis resources setenv pythonhome c apps setenv path c bin c apps qgis bin c apps bin c windows c windows c windows wbem setenv qgis prefix path c apps qgis setenv qt plugin path c apps qgis qtplugins c apps plugins call to getcapabilities request takes over seconds and generates broken response some text on screen but not complete xml attached is qgis server log file only for that request uroš preložnik uroš preložnik uroš preložnik joão gaspar joão gaspar | 1 |
200,188 | 7,001,252,700 | IssuesEvent | 2017-12-18 09:33:20 | deeptools/deepTools | https://api.github.com/repos/deeptools/deepTools | closed | Segmentation fault (core dumped) for bamCoverage | bug high Priority pyBigWig | Hello,
I've installed deeptools 2.5.4 using bioconda, and I am trying to make a bigWig from a BAM file, but I keep getting a `Segmentation fault (core dumped)` error.
Any thoughts on how to fix it? Thanks so much! | 1.0 | Segmentation fault (core dumped) for bamCoverage - Hello,
I've installed deeptools 2.5.4 using bioconda, and I am trying to make a bigWig from a BAM file, but I keep getting a `Segmentation fault (core dumped)` error.
Any thoughts on how to fix it? Thanks so much! | priority | segmentation fault core dumped for bamcoverage hello i ve installed deeptools using bioconda i am trying to make a bigwig from a bam file but i keep getting a segmentation fault core dumped error any thoughts on how to fix it thanks so much | 1 |
310,458 | 9,498,896,484 | IssuesEvent | 2019-04-24 03:56:57 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Read enable and disable TLS from the containerConfig | Priority/Highest Type/Improvement | **Description:**
Enabling TLS is hardcoded in the secureKubernetesIngress.mustache file [1]. Make it configurable and read it from the containerConfig.
[1] https://github.com/wso2/product-microgateway/blob/v2.6.0/components/micro-gateway-cli/src/main/resources/templates/secureKubernetesIngress.mustache#L11 | 1.0 | Read enable and disable TLS from the containerConfig - **Description:**
Enabling TLS is hardcoded in the secureKubernetesIngress.mustache file [1]. Make it configurable and read it from the containerConfig.
[1] https://github.com/wso2/product-microgateway/blob/v2.6.0/components/micro-gateway-cli/src/main/resources/templates/secureKubernetesIngress.mustache#L11 | priority | read enable and disable tls from the containerconfig description enable tls is hardcoded in the securekubernetesingress mustache file make it configurable and read from the containerconfig | 1 |
715,278 | 24,593,070,649 | IssuesEvent | 2022-10-14 05:25:19 | XDRAGON2002/iiit-gram | https://api.github.com/repos/XDRAGON2002/iiit-gram | closed | Integrate mongodb atlas | help wanted good first issue easy hacktoberfest higher-priority | Integrate the mongoose library with the server so that we can start working with an actual database. | 1.0 | Integrate mongodb atlas - Integrate the mongoose library with the server so that we can start working with an actual database. | priority | integrate mongodb atlas integrate the mongoose library with the server so that we can start working with an actual database | 1 |
569,465 | 17,014,531,297 | IssuesEvent | 2021-07-02 10:03:32 | ArastoSahbaei/codic-education-website | https://api.github.com/repos/ArastoSahbaei/codic-education-website | closed | create login functionality | High Priority enhancement | a user should be able to login to the application and have the username/email displayed | 1.0 | create login functionality - a user should be able to login to the application and have the username/email displayed | priority | create login functionality a user should be able to login to the application and have the username email displayed | 1 |
90,180 | 3,812,589,454 | IssuesEvent | 2016-03-27 17:57:39 | chef/chef | https://api.github.com/repos/chef/chef | closed | tons of "Deprecation class overwrites LWRP resource" WARNING SPAM with chefspec | Bug Chef Core High Priority | ## Description
See https://github.com/chef-cookbooks/apt/issues/163 or https://github.com/sethvargo/chefspec/issues/642
## Chef Version
12.7.2-ish
## Replication Case
* cookbook with a custom resource
* a chefspec file with at least two chef runs in it (doesn't throw it with the default unit test from chef-dk because there's only one example)
* must use the ServerRunner (SoloRunner does not throw the WARN)
## Client Output
stuff like this:
```
% rspec
[2016-03-04T13:59:11-08:00] WARN: MultipackageInternal already exists! Deprecation class overwrites Custom resource multipackage_internal from cookbook multipackage
[2016-03-04T13:59:11-08:00] WARN: FakeTestResource already exists! Deprecation class overwrites Custom resource fake_test_resource from cookbook fake
.
Finished in 1.73 seconds (files took 0.99219 seconds to load)
2 examples, 0 failures
ChefSpec Coverage report generated...
Total Resources: 3
Touched Resources: 0
Touch Coverage: 0.0%
Untouched Resources:
multipackage_internal[collected packages install] multipackage/libraries/multipackage_impl.rb:63
multipackage_internal[collected packages remove] multipackage/libraries/multipackage_impl.rb:63
multipackage_internal[collected packages upgrade] multipackage/libraries/multipackage_impl.rb:63
```
| 1.0 | tons of "Deprecation class overwrites LWRP resource" WARNING SPAM with chefspec - ## Description
See https://github.com/chef-cookbooks/apt/issues/163 or https://github.com/sethvargo/chefspec/issues/642
## Chef Version
12.7.2-ish
## Replication Case
* cookbook with a custom resource
* a chefspec file with at least two chef runs in it (doesn't throw it with the default unit test from chef-dk because there's only one example)
* must use the ServerRunner (SoloRunner does not throw the WARN)
## Client Output
stuff like this:
```
% rspec
[2016-03-04T13:59:11-08:00] WARN: MultipackageInternal already exists! Deprecation class overwrites Custom resource multipackage_internal from cookbook multipackage
[2016-03-04T13:59:11-08:00] WARN: FakeTestResource already exists! Deprecation class overwrites Custom resource fake_test_resource from cookbook fake
.
Finished in 1.73 seconds (files took 0.99219 seconds to load)
2 examples, 0 failures
ChefSpec Coverage report generated...
Total Resources: 3
Touched Resources: 0
Touch Coverage: 0.0%
Untouched Resources:
multipackage_internal[collected packages install] multipackage/libraries/multipackage_impl.rb:63
multipackage_internal[collected packages remove] multipackage/libraries/multipackage_impl.rb:63
multipackage_internal[collected packages upgrade] multipackage/libraries/multipackage_impl.rb:63
```
| priority | tons of deprecation class overwrites lwrp resource warning spam with chefspec description see or chef version ish replication case cookbook with a custom resource a chefspec file with at least two chef runs in it doesn t throw it with the default unit test from chef dk because there s only one example must use the serverrunner solorunner does not throw the warn client output stuff like this rspec warn multipackageinternal already exists deprecation class overwrites custom resource multipackage internal from cookbook multipackage warn faketestresource already exists deprecation class overwrites custom resource fake test resource from cookbook fake finished in seconds files took seconds to load examples failures chefspec coverage report generated total resources touched resources touch coverage untouched resources multipackage internal multipackage libraries multipackage impl rb multipackage internal multipackage libraries multipackage impl rb multipackage internal multipackage libraries multipackage impl rb | 1 |
463,800 | 13,300,966,429 | IssuesEvent | 2020-08-25 12:14:56 | rocm-arch/rocm-arch | https://api.github.com/repos/rocm-arch/rocm-arch | closed | ROCm 3.7 released | high priority | New upstream release, most notably with Ubuntu 20.04 support:
https://github.com/RadeonOpenCompute/ROCm#Whats-New-in-This-Release
First experiments by @oleid indicate that it harmonizes with kernel 5.8.1:
https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1201991-radeon-rocm-3-7-release-enables-openmp-5-0-by-default-in-aomp?p=1202019#post1202019
As with the ROCm 3.5 release I will add all packages to a [milestone](https://github.com/rocm-arch/rocm-arch/milestone/2) | 1.0 | priority | 1 |
308,641 | 9,441,387,044 | IssuesEvent | 2019-04-15 01:15:50 | qlcchain/go-qlc | https://api.github.com/repos/qlcchain/go-qlc | closed | Implement POV chain - ledger | Priority: High Type: Enhancement | ### Description of the issue
Implement POV chain ledger
### Issue-Type
- [ ] bug report
- [X] feature request
- [ ] Documentation improvement | 1.0 | priority | 1 |
77,951 | 3,507,962,498 | IssuesEvent | 2016-01-08 15:48:32 | Mobicents/RestComm | https://api.github.com/repos/Mobicents/RestComm | closed | Wait for calls to leave a conference before stopping it | 1. Bug Core engine High-Priority XMS-1.0.0 | While running performance tests for JSR309 conference scenarios against XMS, an issue came up where the JSR309 driver was throwing timeouts when unjoining the call leg from the conference.
```
17:05:46,152 ERROR [com.vendor.dialogic.javax.media.mscontrol.DlgcSync2AsyncMonitor] (RestComm-akka.actor.default-dispatcher-24) SYNC_2_ASYNC DlgcSync2AsyncMonitor::waitForRequestCompletion():: test - out of wait with status: Error Timeout Error Executing What: DlgcProxyHelper:unjoin Request
```
The problem seems to be located [here](https://github.com/Mobicents/RestComm/blob/master/restcomm/restcomm.telephony/src/main/java/org/mobicents/servlet/restcomm/telephony/Conference.java#L240), when the Conference actor asks all participants to leave and then destroys its media session.
Since all these requests are asynchronous, we probably face a situation where the Conference session is destroyed before all the calls could unjoin properly. | 1.0 | priority | 1 |
48,205 | 2,994,686,069 | IssuesEvent | 2015-07-22 13:19:15 | N4SJAMK/teamboard-client-react | https://api.github.com/repos/N4SJAMK/teamboard-client-react | closed | Last editor is not updated ? | bug HIGH PRIORITY | After edit there is still pee@p.fi user who has edited a board..
I am Guest! | 1.0 | priority | 1 |
438,449 | 12,627,959,311 | IssuesEvent | 2020-06-15 00:22:27 | juancri/covid19-animation-generator | https://api.github.com/repos/juancri/covid19-animation-generator | closed | Add chart type: stacked area | high priority work in progress | - Add stacked area as an addition to line
- It will allow us to generate the graph chile/upc with master | 1.0 | priority | 1 |
376,250 | 11,140,298,652 | IssuesEvent | 2019-12-21 13:15:40 | level73/Monithon | https://api.github.com/repos/level73/Monithon | closed | Errore di Permessi File | Priority: High Type: Bug | Warning: move_uploaded_file(/var/www/dev.monithon.it/public_htmlpublic/resources/60cb4c8fbbbfd3164c63847a643b006241fae019.jpg): failed to open stream: No such file or directory in /var/www/dev.monithon.it/public_html/lib/repo.class.php on line 31
Warning: move_uploaded_file(): Unable to move '/tmp/php4HwX5L' to '/var/www/dev.monithon.it/public_htmlpublic/resources/60cb4c8fbbbfd3164c63847a643b006241fae019.jpg' in /var/www/dev.monithon.it/public_html/lib/repo.class.php on line 31
Array
(
[0] => 23000
[1] => 1048
[2] => Column 'file_repository' cannot be null
)
If I save only the description, author and title, it gives me errors:
Report saved successfully. [21]
Error while uploading the file. File not saved. [650]
Files uploaded! [91]
After saving I would expect to be redirected to the home page
If I go to "i miei report" (my reports) I see that it saved 3 different reports (although I saved 5 times).
 | 1.0 | priority | 1 |
458,751 | 13,181,157,904 | IssuesEvent | 2020-08-12 13:55:17 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | opened | Web Component: Develop Card of the React version | dev package: web components priority: high | #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
IBM.com Library developer
> I need to:
create the `Card`
> so that I can:
provide ibm.com adopter developers a web component version for every react version available in the ibm.com Library
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- Create with Shadow DOM and Custom Elements standards
- **See the Epic for the Design and Functional specs information**
- [React canary environment](https://ibmdotcom-react-canary.mybluemix.net/?path=/docs/overview-getting-started--page)
- Prod QA testing issue (#3533)
#### Acceptance criteria
- [ ] Include README for the web component and corresponding styles
- [ ] Create Web Components styles in styles package
- [ ] No custom styles in web-components package
- [ ] Do not create knobs in Storybook that include JSON objects
- [ ] Break out Storybook stories into multiple variation stories, if applicable
- [ ] Create codesandbox example under `/packages/web-components/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] If a design is provided, the Designer is included as a Reviewer in the Pull Request
- [ ] Provide a direct link to the deploy preview for the designer in the Pull Request description
- [ ] A comment is posted in the Design QA issue, tagging Wonil and Roberta, when development is finished
- [ ] The Storybook link is added to the Design QA issue for their testing
- [ ] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when development is finished
 | 1.0 | priority | 1 |
568,063 | 16,946,105,405 | IssuesEvent | 2021-06-28 07:02:43 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [Semantic API] Need package name of a symbol | Area/Compiler Area/SemanticAPI Priority/High Team/CompilerFE Team/CompilerFETools Type/Improvement | **Description:**
Currently, getting a symbol through the Semantic API allows us to get the module name, version, and orgName. Going forward we will need to get the package name for API Docs.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
 | 1.0 | priority | 1 |
627,363 | 19,902,820,151 | IssuesEvent | 2022-01-25 09:45:05 | w3c/w3c-website | https://api.github.com/repos/w3c/w3c-website | closed | Copy change on labels for switching between different localised websites | high priority cms | In the CMS, you can select which language you are viewing the site in.

For the entries under 'W3C localised websites' can the copy be updated for each site in the following way:
"W3C CMS" to "English"
"W3C 日本" to "Japanese 日本"
"W3C 中国" to "Chinese 中国"
This will make it easier for English speakers to know the difference between the Japanese and Chinese sites.
 | 1.0 | priority | 1 |
79,894 | 3,548,342,507 | IssuesEvent | 2016-01-20 14:06:26 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | cannot duplicate multigene phenotype rows | bug high priority next |
session appears to be bugging
curs/558b22ef42aa6ec2
I can no longer duplicate and edit (or just duplicate) multigene phenotype rows?????
 | 1.0 | priority | 1 |
322,972 | 9,834,514,988 | IssuesEvent | 2019-06-17 09:52:26 | workcraft/workcraft | https://api.github.com/repos/workcraft/workcraft | closed | Trace conversion to DTD misses first level if initial state is high | bug priority:high status:confirmed tag:model:wtg | To reproduce, create a simulation trace "a-" and convert it to DTD -- the resultant waveform misses the initial high level. | 1.0 | priority | 1 |
485,070 | 13,960,627,415 | IssuesEvent | 2020-10-24 22:04:28 | ofekmiz/Habitica-Pomodoro-SiteKeeper | https://api.github.com/repos/ofekmiz/Habitica-Pomodoro-SiteKeeper | closed | Notification Volume | New Feature Priority : High | Hi!
The notification sound is really, really loud, it scares me every time haha
I work on a Mac and can't adjust the volume of Google Chrome or any application individually
hope a kind person can help solve this problem
thank you so much
 | 1.0 | priority | 1 |
393,165 | 11,611,239,444 | IssuesEvent | 2020-02-26 05:56:26 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | New Discussion modal window not working. | Has-PR bug priority: high | **Describe the bug**
New Discussion modal window is not opening because of a JS error when the plugin is activated with LearnDash Ratings, Reviews & Feedback plugin.
`Uncaught ReferenceError: bp_select2 is not defined
at HTMLDocument.<anonymous> (common.js?ver=2.5.14-6684:13)
at i (jquery.js?ver=1.12.4-wp:2)
at Object.fireWith [as resolveWith] (jquery.js?ver=1.12.4-wp:2)
at Function.ready (jquery.js?ver=1.12.4-wp:2)
at HTMLDocument.J (jquery.js?ver=1.12.4-wp:2)`
On further debugging, I found that the bp_select2 variable is only localized if wp_get_available_translations doesn't exist.
This code is present in bp-core/bp-core-cssjs.php. Please check the following link
https://github.com/buddyboss/buddyboss-platform/blob/dev/src/bp-core/bp-core-cssjs.php#L219
This issue is occurring because the LearnDash Ratings, Reviews & Feedback plugin also uses wp_get_available_translations function.
We can further extrapolate this to mean that if any plugin uses wp_get_available_translations on the frontend and its code is executed before buddyboss-platform, then the code inside the "if condition" will not execute, and all the JS files that use the bp_select2 variable will throw a JS error.
**Expected behavior**
The if condition should only wrap wp_get_available_translations function. The localization condition should be moved outside the if condition.
Alternatively, you can add checks in the js file for bp_select2 variable before using it.
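The second workaround, guarding every use of bp_select2 in the JavaScript files, can be sketched as below. The function name `initCommon` is a hypothetical stand-in for the code in common.js, not BuddyBoss's actual API; in the real files the localized object is a browser global printed by WordPress's script localization.

```javascript
// Sketch of a defensive check before using a localized object that may
// never have been printed to the page. `initCommon` is illustrative only.
function initCommon(scope) {
  if (typeof scope.bp_select2 === 'undefined') {
    // Fail soft instead of throwing "Uncaught ReferenceError".
    return 'bp_select2 not localized, skipping select2 setup';
  }
  return 'select2 setup with language: ' + scope.bp_select2.lang;
}

console.log(initCommon({}));                             // plugin-conflict case
console.log(initCommon({ bp_select2: { lang: 'en' } })); // normal case
```

This only masks the symptom, though; moving the localization call outside the wp_get_available_translations condition, as described above, fixes the root cause.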
I am attaching a patch file with a fix for this and I would appreciate it if someone could review this and
get the fix released as soon as possible.
[patch.zip](https://github.com/buddyboss/buddyboss-platform/files/4234779/patch.zip)
**Screenshots**

 | 1.0 | priority | 1 |
361,407 | 10,708,349,074 | IssuesEvent | 2019-10-24 19:29:43 | seanlesch/CilantroAudit | https://api.github.com/repos/seanlesch/CilantroAudit | closed | Merge front end branches into new import scheme | high_priority | All imports are now done through the CilantroAudit folder. The front end widgets and source files need to be relocated and any path issues with them changed to match this import scheme. | 1.0 | priority | 1 |
538,051 | 15,761,557,849 | IssuesEvent | 2021-03-31 10:06:34 | netdata/netdata-cloud | https://api.github.com/repos/netdata/netdata-cloud | closed | [BUG] Inconsistency between individual node alarm and room alarm counters | alerts-team-bugs bug priority/high | A single node's alarm counters are much higher than the total for the room:

 | 1.0 | priority | 1 |
698,397 | 23,978,175,388 | IssuesEvent | 2022-09-13 13:14:38 | saesrpg/saesrpg | https://api.github.com/repos/saesrpg/saesrpg | closed | Mt. Chiliad to LV Creek Trucking Payout Issue | Type: Bug Priority: High Status: Done | Upon doing the Trucking delivery from the top of Mount Chiliad to the Creek Warehousing in Las Venturas, it only gives the normal payout cash, **_NOT GIVING THE DIFFICULTY BONUS AT ALL._**
**Proof:**



 | 1.0 | Mt. Chiliad to LV Creek Trucking Payout Issue - Upon doing the Trucking delivery from the top of Mount Chiliad to the Creek Warehousing in Las Venturas, it only gives the normal payout cash, **_NOT GIVING THE DIFFICULTY BONUS AT ALL._**
**Proof:**



 | priority | mt chiliad to lv creek trucking payout issue upon doing the trucking delivery from the top of mount chiliad to the creek warehousing in las venturas it only gives the normal payout cash not giving the difficulty bonus at all proof | 1 |
792,857 | 27,975,836,701 | IssuesEvent | 2023-03-25 15:17:06 | ItKlubBozoLagan/kontestis | https://api.github.com/repos/ItKlubBozoLagan/kontestis | closed | Create organisations | backend frontend high priority | Allow schools or other institutions to create organisations to more easily group students and organize exams.
- Select organisation page after login
- Manage organisation members
- Private organisation contests separate from the rest of the site | 1.0 | priority | 1 |
63,992 | 3,203,390,348 | IssuesEvent | 2015-10-02 18:45:54 | twosigma/beaker-notebook | https://api.github.com/repos/twosigma/beaker-notebook | opened | ipython v4 still broken?? | Bug Language Plugins Priority High | beaker no longer seems to work with ipython v4 and even if I got back to https://github.com/twosigma/beaker-notebook/pull/2280
it still doesn't work. what happened?? | 1.0 | priority | 1 |
117,509 | 4,717,051,601 | IssuesEvent | 2016-10-16 12:05:14 | Rello/audioplayer | https://api.github.com/repos/Rello/audioplayer | closed | Search for cover in album folder | enhancement high priority | It would be nice if the player would search for a cover/folder.jpg/png in the album folder, and use this as the cover. | 1.0 | priority | 1 |
553,471 | 16,372,449,092 | IssuesEvent | 2021-05-15 12:15:59 | sopra-fs21-group-11/sopra-client | https://api.github.com/repos/sopra-fs21-group-11/sopra-client | closed | Fixes: fixing wait time for creating account as well as creation of lobby, close game when you created game in lobby | M4 bug high priority task | estimated time: 1.5h | 1.0 | priority | 1 |
321,295 | 9,796,551,081 | IssuesEvent | 2019-06-11 07:55:28 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | Can't leave Kezan because of bugged quests | Fixed Confirmed Fixed in Dev Priority-High | **Links:**
from WoWHead or our Armory
**What is happening:**
Both "Off to the Bank" and "Fourth and Goal" quests can't be completed.
**What should happen:**
In "Off to the Bank" you can't buy the needed item to complete the quest and in "Fourth and Goal" quest the Bildgewater Buccaneer doesn't spawn to you get in and complete the quest.
Without these two quests you can't progress ingame to leave the starting zone.
| 1.0 | Can't leave Kezan because of bugged quests - **Links:**
from WoWHead or our Armory
**What is happening:**
Both "Off to the Bank" and "Fourth and Goal" quests can't be completed.
**What should happen:**
In "Off to the Bank" you can't buy the needed item to complete the quest and in "Fourth and Goal" quest the Bildgewater Buccaneer doesn't spawn to you get in and complete the quest.
Without these two quests you can't progress ingame to leave the starting zone.
| priority | can t leave kezan because of bugged quests links from wowhead or our armory what is happening both off to the bank and fourth and goal quests can t be completed what should happen in off to the bank you can t buy the needed item to complete the quest and in fourth and goal quest the bilgewater buccaneer doesn t spawn for you to get in and complete the quest without these two quests you can t progress ingame to leave the starting zone | 1
69,915 | 3,316,304,749 | IssuesEvent | 2015-11-06 16:21:19 | TeselaGen/ve | https://api.github.com/repos/TeselaGen/ve | closed | Can't import or create a valid FASTA AA file | Customer: DAS Phase I Milestone #4 - Oracle Rewrite Priority: High Type: Bug | Can't import or create a valid FASTA AA file. Should be able to create in a simple text box editor for direct typing or cut/paste.
>gi|129295|sp|P01013|OVAX_CHICK GENE X PROTEIN (OVALBUMIN-RELATED) QIKDLLVSSSTDLDTTLVLVNAIYFKGMWKTAFNAEDTREMPFHVTKQESKPVQMMCMNNSFNVAT PAEKMKILELPFASGDLSMLVLLPDEVSDLERIE TINFEKLTEWTNPNTMEKRRVKVYLPQMKIEEK NLTSVLMALGMTDLFIPSANLTGISSAESLKISQ VHGAFMELSEDGIEMAGSTGVIEDIKHSPESEQ RADHPFLFLIKHNPTNTIVYFGRYWSP | 1.0 | Can't import or create a valid FASTA AA file - Can't import or create a valid FASTA AA file. Should be able to create in a simple text box editor for direct typing or cut/paste.
>gi|129295|sp|P01013|OVAX_CHICK GENE X PROTEIN (OVALBUMIN-RELATED) QIKDLLVSSSTDLDTTLVLVNAIYFKGMWKTAFNAEDTREMPFHVTKQESKPVQMMCMNNSFNVAT PAEKMKILELPFASGDLSMLVLLPDEVSDLERIE TINFEKLTEWTNPNTMEKRRVKVYLPQMKIEEK NLTSVLMALGMTDLFIPSANLTGISSAESLKISQ VHGAFMELSEDGIEMAGSTGVIEDIKHSPESEQ RADHPFLFLIKHNPTNTIVYFGRYWSP | priority | can t import or create a valid fasta aa file can t import or create a valid fasta aa file should be able to create in a simple text box editor for direct typing or cut paste gi sp ovax chick gene x protein ovalbumin related qikdllvssstdldttlvlvnaiyfkgmwktafnaedtrempfhvtkqeskpvqmmcmnnsfnvat paekmkilelpfasgdlsmlvllpdevsdlerie tinfekltewtnpntmekrrvkvylpqmkieek nltsvlmalgmtdlfipsanltgissaeslkisq vhgafmelsedgiemagstgviedikhspeseq radhpflflikhnptntivyfgrywsp | 1 |
341,548 | 10,297,577,900 | IssuesEvent | 2019-08-28 11:26:20 | wso2/ballerina-integrator | https://api.github.com/repos/wso2/ballerina-integrator | opened | Modules with java dependencies doesn't work properly after pulling from BC | Ballerina Priority/High Severity/Blocker | **Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
relates to: https://github.com/ballerina-platform/ballerina-lang/issues/18255 | 1.0 | Modules with java dependencies doesn't work properly after pulling from BC - **Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
relates to: https://github.com/ballerina-platform/ballerina-lang/issues/18255 | priority | modules with java dependencies doesn t work properly after pulling from bc related issues relates to | 1 |
107,007 | 4,288,609,764 | IssuesEvent | 2016-07-17 15:35:47 | rekcuFniarB/forum-theprodigy-ru | https://api.github.com/repos/rekcuFniarB/forum-theprodigy-ru | opened | Add a link button for switching between desktop and mobile views | Priority-High Type-Enhancement Usability | Basically, as is customary on most sites:
a "Switch to desktop/mobile view" link at the end of the page | 1.0 | Add a link button for switching between desktop and mobile views - Basically, as is customary on most sites:
a "Switch to desktop/mobile view" link at the end of the page | priority | add a link button for switching between desktop and mobile views basically as is customary on most sites a switch to desktop mobile view link at the end of the page | 1
101,334 | 4,113,135,427 | IssuesEvent | 2016-06-07 13:13:13 | Metaswitch/clearwater-docker | https://api.github.com/repos/Metaswitch/clearwater-docker | closed | clearwater-cluster-manager doesn't restart Cassandra under Docker | bug high-priority | #### Symptoms
Spin up a deployment under Docker using an etcd-using version (i.e. commit later than https://github.com/Metaswitch/clearwater-docker/commit/ae892d71cb2c1e377d74fec4bf25e32eac1541db).
Cassandra doesn't start, so the deployment is not functional.
Killing Cassandra (using `pkill -f cassandra`) and then restarting clearwater-infrastructure (using `/etc/init.d/clearwater-infrastructure restart`) on both Homestead and Homer seems to resolve the problem.
#### Impact
The deployment is not functional.
#### Release and environment
Seen on release-98.
#### Steps to reproduce
Simply start up a deployment using an etcd-using version. | 1.0 | clearwater-cluster-manager doesn't restart Cassandra under Docker - #### Symptoms
Spin up a deployment under Docker using an etcd-using version (i.e. commit later than https://github.com/Metaswitch/clearwater-docker/commit/ae892d71cb2c1e377d74fec4bf25e32eac1541db).
Cassandra doesn't start, so the deployment is not functional.
Killing Cassandra (using `pkill -f cassandra`) and then restarting clearwater-infrastructure (using `/etc/init.d/clearwater-infrastructure restart`) on both Homestead and Homer seems to resolve the problem.
#### Impact
The deployment is not functional.
#### Release and environment
Seen on release-98.
#### Steps to reproduce
Simply start up a deployment using an etcd-using version. | priority | clearwater cluster manager doesn t restart cassandra under docker symptoms spin up a deployment under docker using an etcd using version i e commit later than cassandra doesn t start so the deployment is not functional killing cassandra using pkill f cassandra and then restarting clearwater infrastructure using etc init d clearwater infrastructure restart on both homestead and homer seems to resolve the problem impact the deployment is not functional release and environment seen on release steps to reproduce simply start up a deployment using an etcd using version | 1 |
491,502 | 14,165,314,270 | IssuesEvent | 2020-11-12 06:59:48 | bounswe/bounswe2020group3 | https://api.github.com/repos/bounswe/bounswe2020group3 | closed | Add color palette and implement style for UI elements | Android Priority: High Status: Completed :100: Type: Enhancement | * **Project: ANDROID**
* **This is a: FEATURE REQUEST**
* **Description of the issue**
A color palette and styling for UI elements such as Button and TextView should be added to the project.
| 1.0 | Add color palette and implement style for UI elements - * **Project: ANDROID**
* **This is a: FEATURE REQUEST**
* **Description of the issue**
A color palette and styling for UI elements such as Button and TextView should be added to the project.
| priority | add color palette and implement style for ui elements project android this is a feature request description of the issue color palette and styling for ui elements such as button textview should be added to project | 1 |
604,626 | 18,715,748,255 | IssuesEvent | 2021-11-03 04:15:00 | FoxxieBot/Foxxie | https://api.github.com/repos/FoxxieBot/Foxxie | closed | Request(Languages): Switch core system to i18next | enhancement Status: In progress Priority: High Refactor New feature | Currently the language system is managed by tails, which is fine for now but will eventually get to a point where it is unsustainable and disorganized as more languages are added.
i18next is a world-class language translation system that can be used easily in conjunction with node.js; it allows for better organization of subfolders, interpolating variables, and language fallback.
- [ ] Events
- [x] Monitors
- [x] Tasks
- [x] Inhibitors
- [ ] Arguments
- [x] Automod
- [x] Moderation
- [ ] Titles
Commands
- [x] Admin
Fun cmds
- [x] Animals
- [x] Leveling
- [x] Pride
- [ ] Websearch
Moderation cmds
- [ ] Basic
- [x] Util
- [ ] Voice
- [ ] Settings
Utility cmds
- [x] General
- [ ] Info
- [x] Misc | 1.0 | Request(Languages): Switch core system to i18next - Currently the language system is managed by tails, which is fine for now but will eventually get to a point where it is unsustainable and disorganized as more languages are added.
i18next is a world-class language translation system that can be used easily in conjunction with node.js; it allows for better organization of subfolders, interpolating variables, and language fallback.
- [ ] Events
- [x] Monitors
- [x] Tasks
- [x] Inhibitors
- [ ] Arguments
- [x] Automod
- [x] Moderation
- [ ] Titles
Commands
- [x] Admin
Fun cmds
- [x] Animals
- [x] Leveling
- [x] Pride
- [ ] Websearch
Moderation cmds
- [ ] Basic
- [x] Util
- [ ] Voice
- [ ] Settings
Utility cmds
- [x] General
- [ ] Info
- [x] Misc | priority | request languages switch core system to currently the language system is managed by tails which is fine for now but will eventually get to a point where it is unsustainable and disorganized as more languages are added is a world class language translation system that can be used in conjunction with node js easily it allows for better origination of subfolders parsing variables and language fallback events monitors tasks inhibitors arguments automod moderation titles commands admin fun cmds animals leveling pride websearch moderation cmds basic util voice settings utility cmds general info misc | 1 |
757,499 | 26,515,478,639 | IssuesEvent | 2023-01-18 20:24:31 | wasmerio/wasmer | https://api.github.com/repos/wasmerio/wasmer | closed | Create-exe with Serialized object fail | priority-high | ### Describe the bug
CI tests `create_exe_serialized_works` and `create_exe_with_object_input_serialized` have been disabled because they are not working.
The generated EXE is somehow incorrect, and just segfaults during some Rust stdlib initialization.
Isolating a binary and running it with gdb gives this output:
```
─── Output/messages ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7d1eff3 in __GI_getenv (name=0x16b0810 "RUST_BACKTRACE") at ./stdlib/getenv.c:51
51 ./stdlib/getenv.c: No such file or directory.
─── Assembly ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
0x00007ffff7d1efe1 __GI_getenv+33 movzbl (%rdi),%eax
0x00007ffff7d1efe4 __GI_getenv+36 mov %rdi,%r13
0x00007ffff7d1efe7 __GI_getenv+39 test %al,%al
0x00007ffff7d1efe9 __GI_getenv+41 je 0x7ffff7d1f088 <__GI_getenv+200>
0x00007ffff7d1efef __GI_getenv+47 cmpb $0x0,0x1(%rdi)
0x00007ffff7d1eff3 __GI_getenv+51 mov 0x0(%rbp),%rbx
0x00007ffff7d1eff7 __GI_getenv+55 jne 0x7ffff7d1f030 <__GI_getenv+112>
0x00007ffff7d1eff9 __GI_getenv+57 or $0x3d,%ah
0x00007ffff7d1effc __GI_getenv+60 test %rbx,%rbx
0x00007ffff7d1efff __GI_getenv+63 jne 0x7ffff7d1f015 <__GI_getenv+85>
─── Breakpoints ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Expressions ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── History ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Memory ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Registers ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
rax 0x0000000000000052 rbx 0x0000000000000000 rcx 0x0000000000000001 rdx 0x000000000000000f rsi 0x000000000000000f rdi 0x00000000016b0810 rbp 0x00000000016d1000 rsp 0x00007fffffffb440 r8 0x00007ffff7eb2c60 r9 0x0000000000000000 r10 0x0000000000000022 r11 0x0000000000000020
r12 0x0000000000000001 r13 0x00000000016b0810 r14 0x00007fffffffb4e8 r15 0x00007fffffffb680 rip 0x00007ffff7d1eff3 eflags [ PF IF RF ] cs 0x00000033 ss 0x0000002b ds 0x00000000 es 0x00000000 fs 0x00000000 gs 0x00000000
─── Source ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Cannot display "getenv.c"
─── Stack ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[0] from 0x00007ffff7d1eff3 in __GI_getenv+51 at ./stdlib/getenv.c:51
[1] from 0x000000000038ec50 in std::sys::unix::os::getenv+141 at library/std/src/sys/unix/os.rs:552
[2] from 0x000000000038ec50 in std::env::_var_os+160 at library/std/src/env.rs:273
[3] from 0x000000000039548f in std::env::var_os<&str>+23 at library/std/src/env.rs:269
[4] from 0x000000000039548f in std::panic::get_backtrace_style+63 at library/std/src/panic.rs:303
[5] from 0x0000000000399226 in std::panicking::default_hook+54 at library/std/src/panicking.rs:269
[6] from 0x0000000000399d36 in std::panicking::rust_panic_with_hook+246 at library/std/src/panicking.rs:698
[7] from 0x0000000000399be9 in std::panicking::begin_panic_handler::{closure#0}+121 at library/std/src/panicking.rs:586
[8] from 0x0000000000398454 in std::sys_common::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::{closure_env#0}, !>+20 at library/std/src/sys_common/backtrace.rs:138
[9] from 0x0000000000399959 in std::panicking::begin_panic_handler+73 at library/std/src/panicking.rs:584
[+]
─── Threads ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[1] id 2683 name wasm.out from 0x00007ffff7d1eff3 in __GI_getenv+51 at ./stdlib/getenv.c:51
─── Variables ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
arg name = 0x16b0810 "RUST_BACKTRACE": 82 'R'
loc ep = <optimized out>, name_start = <optimized out>
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
>>> x/x $rbp
0x16d1000: Cannot access memory at address 0x16d1000
>>> q
```
| 1.0 | Create-exe with Serialized object fail - ### Describe the bug
CI tests `create_exe_serialized_works` and `create_exe_with_object_input_serialized` have been disabled because they are not working.
The generated EXE is somehow incorrect, and just segfaults during some Rust stdlib initialization.
Isolating a binary and running it with gdb gives this output:
```
─── Output/messages ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7d1eff3 in __GI_getenv (name=0x16b0810 "RUST_BACKTRACE") at ./stdlib/getenv.c:51
51 ./stdlib/getenv.c: No such file or directory.
─── Assembly ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
0x00007ffff7d1efe1 __GI_getenv+33 movzbl (%rdi),%eax
0x00007ffff7d1efe4 __GI_getenv+36 mov %rdi,%r13
0x00007ffff7d1efe7 __GI_getenv+39 test %al,%al
0x00007ffff7d1efe9 __GI_getenv+41 je 0x7ffff7d1f088 <__GI_getenv+200>
0x00007ffff7d1efef __GI_getenv+47 cmpb $0x0,0x1(%rdi)
0x00007ffff7d1eff3 __GI_getenv+51 mov 0x0(%rbp),%rbx
0x00007ffff7d1eff7 __GI_getenv+55 jne 0x7ffff7d1f030 <__GI_getenv+112>
0x00007ffff7d1eff9 __GI_getenv+57 or $0x3d,%ah
0x00007ffff7d1effc __GI_getenv+60 test %rbx,%rbx
0x00007ffff7d1efff __GI_getenv+63 jne 0x7ffff7d1f015 <__GI_getenv+85>
─── Breakpoints ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Expressions ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── History ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Memory ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Registers ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
rax 0x0000000000000052 rbx 0x0000000000000000 rcx 0x0000000000000001 rdx 0x000000000000000f rsi 0x000000000000000f rdi 0x00000000016b0810 rbp 0x00000000016d1000 rsp 0x00007fffffffb440 r8 0x00007ffff7eb2c60 r9 0x0000000000000000 r10 0x0000000000000022 r11 0x0000000000000020
r12 0x0000000000000001 r13 0x00000000016b0810 r14 0x00007fffffffb4e8 r15 0x00007fffffffb680 rip 0x00007ffff7d1eff3 eflags [ PF IF RF ] cs 0x00000033 ss 0x0000002b ds 0x00000000 es 0x00000000 fs 0x00000000 gs 0x00000000
─── Source ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Cannot display "getenv.c"
─── Stack ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[0] from 0x00007ffff7d1eff3 in __GI_getenv+51 at ./stdlib/getenv.c:51
[1] from 0x000000000038ec50 in std::sys::unix::os::getenv+141 at library/std/src/sys/unix/os.rs:552
[2] from 0x000000000038ec50 in std::env::_var_os+160 at library/std/src/env.rs:273
[3] from 0x000000000039548f in std::env::var_os<&str>+23 at library/std/src/env.rs:269
[4] from 0x000000000039548f in std::panic::get_backtrace_style+63 at library/std/src/panic.rs:303
[5] from 0x0000000000399226 in std::panicking::default_hook+54 at library/std/src/panicking.rs:269
[6] from 0x0000000000399d36 in std::panicking::rust_panic_with_hook+246 at library/std/src/panicking.rs:698
[7] from 0x0000000000399be9 in std::panicking::begin_panic_handler::{closure#0}+121 at library/std/src/panicking.rs:586
[8] from 0x0000000000398454 in std::sys_common::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::{closure_env#0}, !>+20 at library/std/src/sys_common/backtrace.rs:138
[9] from 0x0000000000399959 in std::panicking::begin_panic_handler+73 at library/std/src/panicking.rs:584
[+]
─── Threads ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[1] id 2683 name wasm.out from 0x00007ffff7d1eff3 in __GI_getenv+51 at ./stdlib/getenv.c:51
─── Variables ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
arg name = 0x16b0810 "RUST_BACKTRACE": 82 'R'
loc ep = <optimized out>, name_start = <optimized out>
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
>>> x/x $rbp
0x16d1000: Cannot access memory at address 0x16d1000
>>> q
```
| priority | create exe with serialized object fail describe the bug ci tests create exe serialized works and create exe with object input serialized have been disable because they are not working the generated exe is somehow incorect and just segfault during some rust s stdlib initialization isolating a binary and running it with gdb gives this output ─── output messages ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── using host libthread db library lib linux gnu libthread db so program received signal sigsegv segmentation fault in gi getenv name rust backtrace at stdlib getenv c stdlib getenv c no such file or directory ─── assembly ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── gi getenv movzbl rdi eax gi getenv mov rdi gi getenv test al al gi getenv je gi getenv cmpb rdi gi getenv mov rbp rbx gi getenv jne gi getenv or ah gi getenv test rbx rbx gi getenv jne ─── breakpoints ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── ─── expressions 
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── ─── history ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── ─── memory ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── ─── registers ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── rax rbx rcx rdx rsi rdi rbp rsp rip eflags cs ss ds es fs gs ─── source ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── cannot display getenv c ─── stack ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── from in gi getenv at stdlib 
getenv c from in std sys unix os getenv at library std src sys unix os rs from in std env var os at library std src env rs from in std env var os at library std src env rs from in std panic get backtrace style at library std src panic rs from in std panicking default hook at library std src panicking rs from in std panicking rust panic with hook at library std src panicking rs from in std panicking begin panic handler closure at library std src panicking rs from in std sys common backtrace rust end short backtrace at library std src sys common backtrace rs from in std panicking begin panic handler at library std src panicking rs ─── threads ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── id name wasm out from in gi getenv at stdlib getenv c ─── variables ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── arg name rust backtrace r loc ep name start ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── x x rbp cannot access memory at address q | 1 |
804,311 | 29,483,696,585 | IssuesEvent | 2023-06-02 08:06:59 | OpenBioML/chemnlp | https://api.github.com/repos/OpenBioML/chemnlp | opened | Run 1B model over all of the PubMed corpus (30B tokens) | priority: high work package: model training | * As a test run to get ready for the Euro PMC corpus we are going to finetune the Pythia-1B model on a PubMed corpus of research papers to test the pipeline. | 1.0 | Run 1B model over all of the PubMed corpus (30B tokens) - * As a test run to get ready for the Euro PMC corpus we are going to finetune the Pythia-1B model on a PubMed corpus of research papers to test the pipeline. | priority | run model over all of the pubmed corpus tokens as a test run to get ready for the euro pmc corpus we are going to finetune the pythia model on a pubmed corpus of research papers to test the pipeline | 1 |
260,695 | 8,213,641,180 | IssuesEvent | 2018-09-04 20:13:43 | unicef/magicbox-maps-prototype | https://api.github.com/repos/unicef/magicbox-maps-prototype | closed | Add authentication to protect connectivity data | enhancement priority:high | For this app, our data providers are fine with the mobility data being public, but the connectivity data must be protected. | 1.0 | Add authentication to protect connectivity data - For this app, our data providers are fine with the mobility data being public, but the connectivity data must be protected. | priority | add authentication to protect connectivity data for this app our data providers are fine with the mobility data being public but the connectivity data must be protected | 1 |
270,018 | 8,445,567,348 | IssuesEvent | 2018-10-18 21:58:56 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Placement of Cite link in DOI line on landing page is erratic | High Priority bug | In testing beta.hydroshare.org I saw the following
Case 1. Cite quite close to DOI https://beta.hydroshare.org/resource/7b2c6e5e63f34d348bde5fa8be51a85a/

Case 2. Cite way to the right https://beta.hydroshare.org/resource/e76494ebad394bdfb06343f54682978c/

This appears to be related to the length of the line listing authors. I think the cite link (which per #2939 should be clarified to "How to cite") should be immediately to the right of the DOI, as in case 1.
| 1.0 | Placement of Cite link in DOI line on landing page is erratic - In testing beta.hydroshare.org I saw the following
Case 1. Cite quite close to DOI https://beta.hydroshare.org/resource/7b2c6e5e63f34d348bde5fa8be51a85a/

Case 2. Cite way to the right https://beta.hydroshare.org/resource/e76494ebad394bdfb06343f54682978c/

This appears to be related to the length of the line listing authors. I think the cite link (which per #2939 should be clarified to "How to cite") should be immediately to the right of the DOI, as in case 1.
| priority | placement of cite link in doi line on landing page is erratic in testing beta hydroshare org i saw the following case cite quite close to doi case cite way to the right this appears to be related to the length of the line listing authors i think the cite link which per should be clarified to how to cite should be immediately to the right of the doi as for case | 1 |
494,053 | 14,244,253,427 | IssuesEvent | 2020-11-19 06:32:08 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.youtube.com - see bug description | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62053 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.youtube.com/watch?v=PKfxmFU3lWY
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Sound low/slow
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/090dfe3f-03e2-4523-a508-eec6bfe1738b)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.youtube.com - see bug description - <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62053 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://m.youtube.com/watch?v=PKfxmFU3lWY
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Sound low/slow
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201108174701</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/090dfe3f-03e2-4523-a508-eec6bfe1738b)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | m youtube com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description sound low slow steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
225,000 | 7,476,490,108 | IssuesEvent | 2018-04-04 03:26:38 | code-helix/slatekit | https://api.github.com/repos/code-helix/slatekit | closed | APIs: Support various registration types | effort_medium enhancement priority_high | # Overview
Currently, Universal APIs are loaded by checking for annotations on regular methods.
This enhancement involves supporting alternate registration methods.
# Changes
- Allow registration of the actions from a trait
- Allow registration of the actions declaratively ( like http routes but with method references )
- Allow registration of the actions from a file ( json format that matches the fields in annotations )
# Note
For item 2 in changes:
```kotlin
...
// Case 1: Register one at a time
actions.register( "admin", "users", "create" , method = Users::create )
actions.register( "admin", "users", "update", method = Users::update )
actions.register( "admin", "users", "delete" , method = Users::delete )
// Case 2: Register in bulk if actions match method names
actions.register( "admin", "users", listOf(
Users::create,
Users::update,
Users::delete
))
// Case 3: Register using an api trait
// This will take all the methods in the trait and set them up as API actions
actions.register( "admin", "users", UsersApi::class )
// Case 4: Register using a file
actions.register( "user://myapp/conf/routes.json" )
...
``` | 1.0 | APIs: Support various registration types - # Overview
Currently, Universal APIs are loaded by checking for annotations on regular methods.
This enhancement involves supporting alternate registration methods.
# Changes
- Allow registration of the actions from a trait
- Allow registration of the actions declaratively ( like http routes but with method references )
- Allow registration of the actions from a file ( json format that matches the fields in annotations )
# Note
For item 2 in changes:
```kotlin
...
// Case 1: Register one at a time
actions.register( "admin", "users", "create" , method = Users::create )
actions.register( "admin", "users", "update", method = Users::update )
actions.register( "admin", "users", "delete" , method = Users::delete )
// Case 2: Register in bulk if actions match method names
actions.register( "admin", "users", listOf(
Users::create,
Users::update,
Users::delete
))
// Case 3: Register using an api trait
// This will take all the methods in the trait and set them up as API actions
actions.register( "admin", "users", UsersApi::class )
// Case 4: Register using a file
actions.register( "user://myapp/conf/routes.json" )
...
``` | priority | apis support various registration types overview currently universal apis are loaded by checking for annotations on regular methods this enhancement involves support alternate registration methods changes allow registration of the actions from a trait allow registration of the actions declaratively like http routes but with method references allow registration of the actions from a file json format that matches the fields in annotations note for item in changes kotlin case register one at a time actions register admin users create method users create actions register admin users update method users update actions register admin users delete method users delete case register in bulk if actions match method names actions register admin users listof users create users update users delete case register using an api trait this will take all the methods in the trait and set them up as api actions actions register admin users usersapi class case register using a file actions register user myapp conf routes json | 1 |
732,897 | 25,279,436,991 | IssuesEvent | 2022-11-16 14:50:00 | aseprite/aseprite | https://api.github.com/repos/aseprite/aseprite | closed | Normal map color wheel not working as intended | bug high priority colorbar | **Problem:**
When selecting a color with the color picker on the image of a normal map sphere, produced outside of Aseprite, the colors selected do not match the one displayed on the color picker. The color selected on the color wheel seems not to match the desired orientation when tested in Unity with default 3D lights.
I tried to use the normal color wheel to select colors between 2 already known and working colors, but it doesn't work at all.
**Steps to reproduce:**
- import a normal map sphere (the one from the wikipedia page for example) in aseprite
- select a color with the color picker, preferably not at the center
- with the normal map palette selected, try to select the same color by clicking at the spot highlighted with a small circle on the color wheel.
- The new color is different from the previous one.
This capture shows the difference between the 2 colors, the first one (in the red circle), actually on the image, and the second, produced with the color wheel:
https://gyazo.com/e115b949781c6c0b77a3ca19bfe563fb
As per my tests, the colors produced with Aseprite's normal color wheel **are not correct** when the texture is imported into a program to light them.
As an added request, while testing some other programs to edit normal map I found one that shows a wireframe 3D square around the color selected on the wheel, clearly giving the information of the direction of the normal. That would be very helpful to wrap your head around what a color will really do when lighted.
If you need any feedback or to test fixes, don't hesitate to contact me.
### Aseprite and System version
* Aseprite version: 1.2.29-x64, steam version
* System: W10
| 1.0 | Normal map color wheel not working as intended - **Problem:**
When selecting a color with the color picker on the image of a normal map sphere, produced outside of Aseprite, the colors selected do not match the one displayed on the color picker. The color selected on the color wheel seems not to match the desired orientation when tested in Unity with default 3D lights.
I tried to use the normal color wheel to select colors between 2 already known and working colors, but it doesn't work at all.
**Steps to reproduce:**
- import a normal map sphere (the one from the wikipedia page for example) in aseprite
- select a color with the color picker, preferably not at the center
- with the normal map palette selected, try to select the same color by clicking at the spot highlighted with a small circle on the color wheel.
- The new color is different from the previous one.
This capture shows the difference between the 2 colors, the first one (in the red circle), actually on the image, and the second, produced with the color wheel:
https://gyazo.com/e115b949781c6c0b77a3ca19bfe563fb
As per my tests, the colors produced with Aseprite's normal color wheel **are not correct** when the texture is imported into a program to light them.
As an added request, while testing some other programs to edit normal map I found one that shows a wireframe 3D square around the color selected on the wheel, clearly giving the information of the direction of the normal. That would be very helpful to wrap your head around what a color will really do when lighted.
If you need any feedback or to test fixes, don't hesitate to contact me.
### Aseprite and System version
* Aseprite version: 1.2.29-x64, steam version
* System: W10
| priority | normal map color wheel not working as intended problem when selecting a color with the color picker on the image of a normal map sphere produced outside of aseprite the colors selected do not match the one displayed on the color picker the color selected on the color wheel seem to not match the desired orientation when tested in unity with default lights i tried to use the normal color wheel to select colors between already known and working colors but it doesn t work at all steps to reproduce import a normal map sphere the one from the wikipedia page for example in aseprite select a color with the color picker preferably not at the center with the normal map palette selected try to select the same color by clicking at the spot highlighted with a small circle on the color wheel the new color is different from the previous one this capture shows the difference between the colors the first one in the red circle actually on the image and the second produced with the color wheel as per my tests the color produced with aseprite normal color wheel are not correct when the texture is imported in a program to light them as an added request while testing some other programs to edit normal map i found one that shows a wireframe square around the color selected on the wheel clearly giving the information of the direction of the normal that would be very helpful to wrap your head around what a color will really do when lighted if you need any feedback or to test fixes don t hesitate to contact me aseprite and system version aseprite version steam version system | 1 |
191,293 | 6,827,739,980 | IssuesEvent | 2017-11-08 18:01:10 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Discover page: search box does NOT obey SOLR syntax | Discovery High Priority User Experience User Interface | The helpful text that suggests that SOLR syntax works in the search box has been wrong for over a year. It now tokenizes terms and is not compatible with SOLR syntax. | 1.0 | Discover page: search box does NOT obey SOLR syntax - The helpful text that suggests that SOLR syntax works in the search box has been wrong for over a year. It now tokenizes terms and is not compatible with SOLR syntax. | priority | discover page search box does not obey solr syntax the helpful text that suggests that solr syntax works in the search box has been wrong for over a year it now tokenizes terms and is not compatible with solr syntax | 1 |
234,225 | 7,718,923,386 | IssuesEvent | 2018-05-23 17:44:51 | michalkowal/Cake.MkDocs | https://api.github.com/repos/michalkowal/Cake.MkDocs | opened | Prepare Cake Aliases for MkDocs | Priority: High Status: On Hold Type: Enhancement | **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
Prepare Cake Aliases for MkDocs that can be used in cake scripts.
Meet documentation guidelines.
Support all MkDocs commands:
- new
- build
- serve
- gh-deploy
Right now, options can be omitted (prepare only base options class).
**Describe alternatives you've considered**
**Additional context**
Documenting guidelines - https://cakebuild.net/docs/contributing/documentation
| 1.0 | Prepare Cake Aliases for MkDocs - **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
Prepare Cake Aliases for MkDocs that can be used in cake scripts.
Meet documentation guidelines.
Support all MkDocs commands:
- new
- build
- serve
- gh-deploy
Right now, options can be omitted (prepare only base options class).
**Describe alternatives you've considered**
**Additional context**
Documenting guidelines - https://cakebuild.net/docs/contributing/documentation
| priority | prepare cake aliases for mkdocs is your feature request related to a problem please describe describe the solution you d like prepare cake aliases for mkdocs that can be used in cake scripts meet documentation guidelines support all mkdocs commands new build serve gh deploy right now options can be omitted prepare only base options class describe alternatives you ve considered additional context documenting guidelines | 1 |
363,436 | 10,741,096,802 | IssuesEvent | 2019-10-29 19:31:07 | eriq-augustine/test-issue-copy | https://api.github.com/repos/eriq-augustine/test-issue-copy | opened | [CLOSED] Deprecate the Groovy Interface | Admin - Next Release Difficulty - Easy Interfaces - Groovy Priority - High Type - Refactor | <a href="https://github.com/eriq-augustine"><img src="https://avatars0.githubusercontent.com/u/337857?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [eriq-augustine](https://github.com/eriq-augustine)**
_Friday Oct 11, 2019 at 20:50 GMT_
_Originally opened as https://github.com/eriq-augustine/psl/issues/210_
----
We are deprecating the Groovy interface in favor of the Java interface.
Mark this as deprecated (in the logging).
| 1.0 | [CLOSED] Deprecate the Groovy Interface - <a href="https://github.com/eriq-augustine"><img src="https://avatars0.githubusercontent.com/u/337857?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [eriq-augustine](https://github.com/eriq-augustine)**
_Friday Oct 11, 2019 at 20:50 GMT_
_Originally opened as https://github.com/eriq-augustine/psl/issues/210_
----
We are deprecating the Groovy interface in favor of the Java interface.
Mark this as deprecated (in the logging).
| priority | deprecate the groovy interface issue by friday oct at gmt originally opened as we are deprecating the groovy interface in favor of the java interface mark this as deprecated in the logging | 1 |
320,421 | 9,780,788,662 | IssuesEvent | 2019-06-07 17:55:07 | mantisbt-plugins/BBCodePlus | https://api.github.com/repos/mantisbt-plugins/BBCodePlus | closed | Installing the plugin breaks issue link in notification emails | Priority: High Status: Awaiting Feedback Type: Bug | In a typical notification email a user gets when, for example, the status of an issue is changed, should look like the following:
`
http://mysite.com/mantis/view.php?id=1
`
However, when I install the latest version of BBCodePlus (Ver. 2.1.3), the link format changes to the following:
`
http://mysite.com/mantis#1
`
This is a problem since clicking the second link takes you to the user's `MyView` page, not directly to the issue.
Additional Info
---

Installing the plugin breaks issue link in notification emails - A typical notification email a user gets when, for example, the status of an issue is changed should look like the following:
`
http://mysite.com/mantis/view.php?id=1
`
However, when I install the latest version of BBCodePlus (Ver. 2.1.3), the link format changes to the following:
`
http://mysite.com/mantis#1
`
This is a problem since clicking the second link takes you to the user's `MyView` page, not directly to the issue.
Additional Info
---

| priority | installing the plugin breaks issue link in notification emails in a typical notification email a user gets when for example the status of an issue is changed should look like the following however when i install the latest version of bbcodeplus ver the link format changes to the following this is a problem since clicking the second link takes you to the user s myview page not directly to the issue additional info | 1 |
136,433 | 5,282,428,809 | IssuesEvent | 2017-02-07 18:50:20 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | opened | null is not an object (evaluating 'c.sentinel') | Category: Framework P1: High Priority Type: Bug | http_user_agent: "Mozilla/5.0 (iPhone; CPU iPhone OS 10_2 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) GSA/22.0.141836113 Mobile/14C92 Safari/600.1.4"
}
exception: "b@https://cdn.ampproject.org/rtv/001486166232176/v0/amp-iframe-0.1.js:5:269"
sample page: https://cdn.ampproject.org/c/www.cbssports.com/nfl/news/watch-patriots-victory-parade-with-live-tv-updates-pictures-from-wild-celebration/amp/ | 1.0 | null is not an object (evaluating 'c.sentinel') - http_user_agent: "Mozilla/5.0 (iPhone; CPU iPhone OS 10_2 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) GSA/22.0.141836113 Mobile/14C92 Safari/600.1.4"
}
exception: "b@https://cdn.ampproject.org/rtv/001486166232176/v0/amp-iframe-0.1.js:5:269"
sample page: https://cdn.ampproject.org/c/www.cbssports.com/nfl/news/watch-patriots-victory-parade-with-live-tv-updates-pictures-from-wild-celebration/amp/ | priority | null is not an object evaluating c sentinel http user agent mozilla iphone cpu iphone os like mac os x applewebkit khtml like gecko gsa mobile safari exception b sample page | 1 |
820,092 | 30,759,330,459 | IssuesEvent | 2023-07-29 13:34:13 | bigomics/omicsplayground | https://api.github.com/repos/bigomics/omicsplayground | closed | Awe snap error out of memory error in chrome browser | bug high priority | A user is reporting an "Awe snap, out of memory error"in chrome browser in V3. See email and image below. Does not happen in V2. Posting this because recently some of you also got this.
> Hi Ivo,
> Yes it can be a browser problem, I will try on Firefox or clear Chrome's browser cache. I am using windows and Chrome yes.
> Regarding the details, the crash happens everytime I try to load the dataset in, it starts to say that the data is being loaded and then that message I showed you pops up and I have to refresh the page.
> Important note is that it only happens in this version 3, if I use version 2 I can load my datasets normally and work without a problem.
> Also like I mentioned before it only happens in my PC.
> I was in the load dataset page. I tried to load the Bousioutis_TAMs dataset which I myself uploaded in the past. But it happens when I try to load any dataset.
> It takes about 10sec from the moment I try to load the data until the error occurs.
> I hope this helps.
> Best,Pedro

Awe snap error out of memory error in chrome browser - A user is reporting an "Awe snap, out of memory error" in chrome browser in V3. See email and image below. Does not happen in V2. Posting this because recently some of you also got this.
> Hi Ivo,
> Yes it can be a browser problem, I will try on Firefox or clear Chrome's browser cache. I am using windows and Chrome yes.
> Regarding the details, the crash happens everytime I try to load the dataset in, it starts to say that the data is being loaded and then that message I showed you pops up and I have to refresh the page.
> Important note is that it only happens in this version 3, if I use version 2 I can load my datasets normally and work without a problem.
> Also like I mentioned before it only happens in my PC.
> I was in the load dataset page. I tried to load the Bousioutis_TAMs dataset which I myself uploaded in the past. But it happens when I try to load any dataset.
> It takes about 10sec from the moment I try to load the data until the error occurs.
> I hope this helps.
> Best,Pedro

| priority | awe snap error out of memory error in chrome browser a user is reporting an awe snap out of memory error in chrome browser in see email and image below does not happen in posting this because recently some of you also got this hi ivo yes it can be a browser problem i will try on firefox or clear chrome s browser cache i am using windows and chrome yes regarding the details the crash happens everytime i try to load the dataset in it starts to say that the data is being loaded and then that message i showed you pops up and i have to refresh the page important note is that it only happens in this version if i use version i can load my datasets normally and work without a problem also like i mentioned before it only happens in my pc i was in the load dataset page i tried to load the bousioutis tams dataset which i myself uploaded in the past but it happens when i try to load any dataset it takes about from the moment i try to load the data until the error occurs i hope this helps best pedro | 1 |
663,167 | 22,163,300,426 | IssuesEvent | 2022-06-04 20:53:58 | evo-lua/mongoose-ffi | https://api.github.com/repos/evo-lua/mongoose-ffi | opened | Add support for TLS encryption | Complexity: Low Priority: High Scope: API Status: Accepted Type: Improvement | OpenSSL is already built into the shared library, so it should be easy to integrate the 2 or so API functions that mongoose provides.
---
Blocked until #9 is implemented. | 1.0 | Add support for TLS encryption - OpenSSL is already built into the shared library, so it should be easy to integrate the 2 or so API functions that mongoose provides.
---
Blocked until #9 is implemented. | priority | add support for tls encryption openssl is already built into the shared library so it should be easy to integrate the or so api functions that mongoose provides blocked until is implemented | 1 |
25,862 | 2,684,017,515 | IssuesEvent | 2015-03-28 15:36:40 | oxyplot/oxyplot | https://api.github.com/repos/oxyplot/oxyplot | closed | PlotModel.Invalidate is not hooked up on iOS/Android (including Forms) | Android help-wanted high-priority iOS please-verify unconfirmed-bug | The other platforms have OnModelChanged logic that gets executed when the model is set on the PlotView (in particular, AttachPlotView), but this is missing from iOS/Android. Thus, calling Invalidate on the model has no effect. This can be observed in some of the examples that take advantage of this (e.g. Touch).
(this is intended to replace issue #288, which just describes one symptom of this issue) | 1.0 | PlotModel.Invalidate is not hooked up on iOS/Android (including Forms) - The other platforms have OnModelChanged logic that gets executed when the model is set on the PlotView (in particular, AttachPlotView), but this is missing from iOS/Android. Thus, calling Invalidate on the model has no effect. This can be observed in some of the examples that take advantage of this (e.g. Touch).
(this is intended to replace issue #288, which just describes one symptom of this issue) | priority | plotmodel invalidate is not hooked up on ios android including forms the other platforms have onmodelchanged logic that gets executed when the model is set on the plotview in particular attachplotview but this is missing from ios android thus calling invalidate on the model has no effect this can be observed in some of the examples that take advantage of this e g touch this is intended to replace issue which just describes one symptom of this issue | 1 |
195,783 | 6,918,281,545 | IssuesEvent | 2017-11-29 11:35:03 | ubuntudesign/vanilla-design | https://api.github.com/repos/ubuntudesign/vanilla-design | closed | Inconsistent navigation header height between Vanilla-using sites | Priority: High Status: Triaged | _From @matthewpaulthomas on May 25, 2017 11:1_
[The “Navigation” page](https://docs.vanillaframework.io/en/patterns/navigation) shows a navigation header that is 48px high:
`html {font-size: 16 px}` × (`.p-navigation__link > a {font-size: .875rem}` +
`.p-navigation .p-navigation__links .p-navigation__link {padding: .75rem 1.25rem;}`)
= 16px × (0.875 + 0.75 top + 0.75 bottom) = 48px.
Some sites using Vanilla are consistent with this:
48px [www.ubuntu.com](https://www.ubuntu.com/)
48px [www.canonical.com](https://www.canonical.com/)
48px [maas.io](https://maas.io/)
Unfortunately, others are not:
50px [build.snapcraft.io](https://build.snapcraft.io/) ([#800](https://github.com/canonical-websites/build.snapcraft.io/issues/800))
51px [snapcraft.io](https://snapcraft.io/), including [snapcraft.io/docs](https://snapcraft.io/docs/)
53px [docs.ubuntu.com](https://docs.ubuntu.com/core/en/)
53px [jujucharms.com](https://jujucharms.com/how-it-works)
The jujucharms.com example is apparently from the past couple of weeks, and build.snapcraft.io from only 3~4 months ago, which suggests that this is not just a matter of some sites using out-of-date Vanilla versions.
So, it may be worthwhile investigating how they became inconsistent, and how this can be avoided in future.
_Copied from original issue: vanilla-framework/vanilla-framework#1083_ | 1.0 | Inconsistent navigation header height between Vanilla-using sites - _From @matthewpaulthomas on May 25, 2017 11:1_
[The “Navigation” page](https://docs.vanillaframework.io/en/patterns/navigation) shows a navigation header that is 48px high:
`html {font-size: 16 px}` × (`.p-navigation__link > a {font-size: .875rem}` +
`.p-navigation .p-navigation__links .p-navigation__link {padding: .75rem 1.25rem;}`)
= 16px × (0.875 + 0.75 top + 0.75 bottom) = 48px.
Some sites using Vanilla are consistent with this:
48px [www.ubuntu.com](https://www.ubuntu.com/)
48px [www.canonical.com](https://www.canonical.com/)
48px [maas.io](https://maas.io/)
Unfortunately, others are not:
50px [build.snapcraft.io](https://build.snapcraft.io/) ([#800](https://github.com/canonical-websites/build.snapcraft.io/issues/800))
51px [snapcraft.io](https://snapcraft.io/), including [snapcraft.io/docs](https://snapcraft.io/docs/)
53px [docs.ubuntu.com](https://docs.ubuntu.com/core/en/)
53px [jujucharms.com](https://jujucharms.com/how-it-works)
The jujucharms.com example is apparently from the past couple of weeks, and build.snapcraft.io from only 3~4 months ago, which suggests that this is not just a matter of some sites using out-of-date Vanilla versions.
So, it may be worthwhile investigating how they became inconsistent, and how this can be avoided in future.
_Copied from original issue: vanilla-framework/vanilla-framework#1083_ | priority | inconsistent navigation header height between vanilla using sites from matthewpaulthomas on may shows a navigation header that is high html font size px × p navigation link a font size p navigation p navigation links p navigation link padding × top bottom some sites using vanilla are consistent with this unfortunately others are not including the jujucharms com example is apparently from the past couple of weeks and build snapcraft io from only months ago which suggests that this is not just a matter of some sites using out of date vanilla versions so it may be worthwhile investigating how they became inconsistent and how this can be avoided in future copied from original issue vanilla framework vanilla framework | 1 |
823,741 | 31,031,672,366 | IssuesEvent | 2023-08-10 12:52:39 | usdevs/usc-website-hackathon-frontend | https://api.github.com/repos/usdevs/usc-website-hackathon-frontend | closed | Admin Page | priority.High | ## Add the ability to create, edit and delete an org
- [x] We need UI to let the user select from a dropdown of existing IGs to edit, or add a button/link to create a new IG
- [x] It's fine to reuse the same form - values from an existing IG can be populated in the form, else the form can be blank when creating a new IG. We can send all the form values (even if they are unchanged for `edit` to the backend)
- [x] An org will need the following fields:
```
id: number // can leave id as -1 if creating org
name: string
description: string
isAdminOrg: boolean
inviteLink: string
category: IGCategory // dropdown list
isInactive: boolean
isInvisible: boolean
igHead: number // userId // dropdown list
otherMembers: number[] // userId[], a multi-select component?
```
- [x] We need exactly one IG Head, and the ability to change the IG head from a list of existing users registered in the DB
- [x] We need the ability to add and remove other ExCo members of the org
- [x] [Parth] Only existing IG heads of that IG have edit rights to an IG
- [x] [Parth] Admin have create and edit rights
- [x] [Parth] Only admin can select or unselect `isAdminOrg`
- [x] UI to delete an org
- [x] The following descriptions for:
1. isInactive: Check this box if the student group is no longer active.
2. isInvisible: Check this box if you do not want to display this student group on the website.
3. isAdminOrg: Check this box if the organisation is part of the NUSC administration.
## Add the ability to create, edit and delete an user in the DB
- [x] Same idea as the organisation form, we can reuse data flow logic from the organisation form
- [x] The following fields are needed: `name, telegramUserName`
## Form validation
- [x] Would be nice to have frontend form validation for all 3 forms - create/edit org, create/edit user, and add/delete MC admin
- [x] Keep the button to generate Telegram token for us frontend developers lol
| 1.0 | Admin Page - ## Add the ability to create, edit and delete an org
- [x] We need UI to let the user select from a dropdown of existing IGs to edit, or add a button/link to create a new IG
- [x] It's fine to reuse the same form - values from an existing IG can be populated in the form, else the form can be blank when creating a new IG. We can send all the form values (even if they are unchanged for `edit` to the backend)
- [x] An org will need the following fields:
```
id: number // can leave id as -1 if creating org
name: string
description: string
isAdminOrg: boolean
inviteLink: string
category: IGCategory // dropdown list
isInactive: boolean
isInvisible: boolean
igHead: number // userId // dropdown list
otherMembers: number[] // userId[], a multi-select component?
```
- [x] We need exactly one IG Head, and the ability to change the IG head from a list of existing users registered in the DB
- [x] We need the ability to add and remove other ExCo members of the org
- [x] [Parth] Only existing IG heads of that IG have edit rights to an IG
- [x] [Parth] Admin have create and edit rights
- [x] [Parth] Only admin can select or unselect `isAdminOrg`
- [x] UI to delete an org
- [x] The following descriptions for:
1. isInactive: Check this box if the student group is no longer active.
2. isInvisible: Check this box if you do not want to display this student group on the website.
3. isAdminOrg: Check this box if the organisation is part of the NUSC administration.
## Add the ability to create, edit and delete a user in the DB
- [x] Same idea as the organisation form, we can reuse data flow logic from the organisation form
- [x] The following fields are needed: `name, telegramUserName`
## Form validation
- [x] Would be nice to have frontend form validation for all 3 forms - create/edit org, create/edit user, and add/delete MC admin
- [x] Keep the button to generate Telegram token for us frontend developers lol
| priority | admin page add the ability to create edit and delete an org we need ui to let the user select from a dropdown of existing igs to edit or add a button link to create a new ig it s fine to reuse the same form values from an existing ig can be populated in the form else the form can be blank when creating a new ig we can send all the form values even if they are unchanged for edit to the backend an org will need the following fields id number can leave id as if creating org name string description string isadminorg boolean invitelink string category igcategory dropdown list isinactive boolean isinvisible boolean ighead number userid dropdown list othermembers number userid a multi select component we need exactly one ig head and the ability to change the ig head from a list of existing users registered in the db we need the ability to add and remove other exco members of the org only existing ig heads of that ig have edit rights to an ig admin have create and edit rights only admin can select or unselect isadminorg ui to delete an org the following descriptions for isinactive check this box if the student group is no longer active isinvisible check this box if you do not want to display this student group on the website isadminorg check this box if the organisation is part of the nusc administration add the ability to create edit and delete an user in the db same idea as the organisation form we can reuse data flow logic from the organisation form the following fields are needed name telegramusername form validation would be nice to have frontend form validation for all forms create edit org create edit user and add delete mc admin keep the button to generate telegram token for us frontend developers lol | 1 |
515,278 | 14,959,021,686 | IssuesEvent | 2021-01-27 02:09:26 | LBL-EESA/TECA | https://api.github.com/repos/LBL-EESA/TECA | closed | fix tc_wind_radii_stats warning | 1_high_priority bug python | ```
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20485: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20519: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20558: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20599: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
``` | 1.0 | fix tc_wind_radii_stats warning - ```
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20485: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20519: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20558: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
/global/common/software/m1517/installs/teca/develop-2021_01_21/lib/teca_py.py:20599: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
``` | priority | fix tc wind radii stats warning global common software installs teca develop lib teca py py matplotlibdeprecationwarning passing non integers as three element position specification is deprecated since and will be removed two minor releases later global common software installs teca develop lib teca py py matplotlibdeprecationwarning passing non integers as three element position specification is deprecated since and will be removed two minor releases later global common software installs teca develop lib teca py py matplotlibdeprecationwarning passing non integers as three element position specification is deprecated since and will be removed two minor releases later global common software installs teca develop lib teca py py matplotlibdeprecationwarning passing non integers as three element position specification is deprecated since and will be removed two minor releases later | 1 |
6,249 | 2,586,394,565 | IssuesEvent | 2015-02-17 11:13:15 | HubTurbo/HubTurbo | https://api.github.com/repos/HubTurbo/HubTurbo | closed | Sync happens more frequently than intended | priority.high type.bug | The problem is that the sync doesn't always respect the time kept by the visual timer shown to users. It may happen during project switching as in the past.
This could be the cause of the concurrency issues in #430, as well as the progress bar restarting in #444. | 1.0 | Sync happens more frequently than intended - The problem is that the sync doesn't always respect the time kept by the visual timer shown to users. It may happen during project switching as in the past.
This could be the cause of the concurrency issues in #430, as well as the progress bar restarting in #444. | priority | sync happens more frequently than intended the problem is that the sync doesn t always respect the time kept by the visual timer shown to users it may happen during project switching as in the past this could be the cause of the concurrency issues in as well as the progress bar restarting in | 1 |
195,698 | 6,917,187,066 | IssuesEvent | 2017-11-29 07:19:21 | ppy/osu-web | https://api.github.com/repos/ppy/osu-web | closed | Emails don't display newlines | high priority | Seems like they are displaying as html when the actual content isn't.

```Delivered-To: pe@ppy.sh
Received: by 10.157.63.253 with SMTP id i58csp3956185ote;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
X-Google-Smtp-Source: AGs4zMYSlTvRViMcVGkCO7ll189jKa8E+8tf/ChH23D5uddlPCrIxPAUgQAmzZtKiGQR6nryV190
X-Received: by 10.84.201.6 with SMTP id u6mr1840439pld.51.1511937627744;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
ARC-Seal: i=1; a=rsa-sha256; t=1511937627; cv=none;
d=google.com; s=arc-20160816;
b=AocVKGYifYqNykFEe7kGe9K11Mvgbf3h0P7wg45dUSXLTciXv5Kn2mK5aTjLVaRD0d
uZhQ1UDMnEQ38tvWbb34bq6Frkt9QyQUALhNEgxjdgW0KECRRsFIW1UuKvM2YdSUKvc+
1AADJuTvAThBsaySiyhnoFb6jZHja5z6lLCsp66i3KHDW2D480iY/xr084TbQE0LHZVa
BtovSZ9KF4A0BGPdH87KbxByu2pGylei2FOAX+i71cvLkqQmcNl/8R/18pRFhqXXRSZc
gAdNUQPygJ0cf+tTjqDMWkYWy6Vt/B6koCqx8wqbXN4+vdVW2JQrvP5YSgID6Nl9amiA
LpYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=content-transfer-encoding:mime-version:to:from:subject:date
:message-id:arc-authentication-results;
bh=eG9e58DGcmq1gb8fF0OLgdytZiWtTDll48b94xoWMdc=;
b=i7ZA7yuIhyLTentL18aeBF7Zw6F/GWozfFHOCE4zvqT8vdtclrCG+dYmyvksrjLiRY
tVRV/s6BZB5tM471+aTKY4ZOexTqqD0ymzEJGxEZjRBPMOZeyVvgmPJoaxlumANlNKMp
AYN2O4mAM4TdXYTtS28EWd9t9z/r+i8ROn1jdi9ASXcRRabGK0PCRhimq3i/wMvIUD9i
dZKnIlkSUM4obwaIyuGPPBYDmSAXrjFK6tXlfSnGLsw6ysmM9i/L9DNxYH/qW6zItwy3
Iwp8yIBeBZtKG0xa1cNVaW88vAq89H/XCqjBepg5/rDfmKWpvJQFQ6wqmO2HxMgZdFao
H6xw==
ARC-Authentication-Results: i=1; mx.google.com;
spf=pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) smtp.mailfrom=osu@ppy.sh
Return-Path: <osu@ppy.sh>
Received: from smtp.ppy.sh (smtp.ppy.sh. [198.199.109.58])
by mx.google.com with ESMTP id j6si761744pll.725.2017.11.28.22.40.27
for <pe@ppy.sh>;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
Received-SPF: pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) client-ip=198.199.109.58;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) smtp.mailfrom=osu@ppy.sh
Received: from [127.0.0.1] (localhost [127.0.0.1])
Received: by new.ppy.sh (Postfix, from userid 33) id 31B8D826CB; Wed, 29 Nov 2017 06:40:27 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1])
Message-ID: <bc2de0446143921d226a6d82e777004c@swift.generated>
Date: Wed, 29 Nov 2017 06:40:27 +0000
Subject: You have an osu! supporter tag!
From: "osu!" <osu@ppy.sh>
To: pe@ppy.sh
MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Hey there peppy,
Someone has just gifted you an osu! supporter tag!
Thanks to them, you now have access osu!direct and other supporter benefits=
for the next 1 month.
You can find out more details on these features at https://osu.ppy.sh/home/=
support
The person who gifted you this tag may choose to remain anonymous, so they =
have not been mentioned in this notification
(But you likely already know who it is ;).
Regards,
osu! Management``` | 1.0 | Emails don't display newlines - Seems like they are displaying as html when the actual content isn't.

```Delivered-To: pe@ppy.sh
Received: by 10.157.63.253 with SMTP id i58csp3956185ote;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
X-Google-Smtp-Source: AGs4zMYSlTvRViMcVGkCO7ll189jKa8E+8tf/ChH23D5uddlPCrIxPAUgQAmzZtKiGQR6nryV190
X-Received: by 10.84.201.6 with SMTP id u6mr1840439pld.51.1511937627744;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
ARC-Seal: i=1; a=rsa-sha256; t=1511937627; cv=none;
d=google.com; s=arc-20160816;
b=AocVKGYifYqNykFEe7kGe9K11Mvgbf3h0P7wg45dUSXLTciXv5Kn2mK5aTjLVaRD0d
uZhQ1UDMnEQ38tvWbb34bq6Frkt9QyQUALhNEgxjdgW0KECRRsFIW1UuKvM2YdSUKvc+
1AADJuTvAThBsaySiyhnoFb6jZHja5z6lLCsp66i3KHDW2D480iY/xr084TbQE0LHZVa
BtovSZ9KF4A0BGPdH87KbxByu2pGylei2FOAX+i71cvLkqQmcNl/8R/18pRFhqXXRSZc
gAdNUQPygJ0cf+tTjqDMWkYWy6Vt/B6koCqx8wqbXN4+vdVW2JQrvP5YSgID6Nl9amiA
LpYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=content-transfer-encoding:mime-version:to:from:subject:date
:message-id:arc-authentication-results;
bh=eG9e58DGcmq1gb8fF0OLgdytZiWtTDll48b94xoWMdc=;
b=i7ZA7yuIhyLTentL18aeBF7Zw6F/GWozfFHOCE4zvqT8vdtclrCG+dYmyvksrjLiRY
tVRV/s6BZB5tM471+aTKY4ZOexTqqD0ymzEJGxEZjRBPMOZeyVvgmPJoaxlumANlNKMp
AYN2O4mAM4TdXYTtS28EWd9t9z/r+i8ROn1jdi9ASXcRRabGK0PCRhimq3i/wMvIUD9i
dZKnIlkSUM4obwaIyuGPPBYDmSAXrjFK6tXlfSnGLsw6ysmM9i/L9DNxYH/qW6zItwy3
Iwp8yIBeBZtKG0xa1cNVaW88vAq89H/XCqjBepg5/rDfmKWpvJQFQ6wqmO2HxMgZdFao
H6xw==
ARC-Authentication-Results: i=1; mx.google.com;
spf=pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) smtp.mailfrom=osu@ppy.sh
Return-Path: <osu@ppy.sh>
Received: from smtp.ppy.sh (smtp.ppy.sh. [198.199.109.58])
by mx.google.com with ESMTP id j6si761744pll.725.2017.11.28.22.40.27
for <pe@ppy.sh>;
Tue, 28 Nov 2017 22:40:27 -0800 (PST)
Received-SPF: pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) client-ip=198.199.109.58;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of osu@ppy.sh designates 198.199.109.58 as permitted sender) smtp.mailfrom=osu@ppy.sh
Received: from [127.0.0.1] (localhost [127.0.0.1])
Received: by new.ppy.sh (Postfix, from userid 33) id 31B8D826CB; Wed, 29 Nov 2017 06:40:27 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1])
Message-ID: <bc2de0446143921d226a6d82e777004c@swift.generated>
Date: Wed, 29 Nov 2017 06:40:27 +0000
Subject: You have an osu! supporter tag!
From: "osu!" <osu@ppy.sh>
To: pe@ppy.sh
MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Hey there peppy,
Someone has just gifted you an osu! supporter tag!
Thanks to them, you now have access osu!direct and other supporter benefits=
for the next 1 month.
You can find out more details on these features at https://osu.ppy.sh/home/=
support
The person who gifted you this tag may choose to remain anonymous, so they =
have not been mentioned in this notification
(But you likely already know who it is ;).
Regards,
osu! Management``` | priority | emails don t display newlines seems like they are displaying as html when the actual content isn t delivered to pe ppy sh received by with smtp id tue nov pst x google smtp source x received by with smtp id tue nov pst arc seal i a rsa t cv none d google com s arc b lpya arc message signature i a rsa c relaxed relaxed d google com s arc h content transfer encoding mime version to from subject date message id arc authentication results bh b dymyvksrjliry tvrv r arc authentication results i mx google com spf pass google com domain of osu ppy sh designates as permitted sender smtp mailfrom osu ppy sh return path received from smtp ppy sh smtp ppy sh by mx google com with esmtp id for tue nov pst received spf pass google com domain of osu ppy sh designates as permitted sender client ip authentication results mx google com spf pass google com domain of osu ppy sh designates as permitted sender smtp mailfrom osu ppy sh received from localhost received by new ppy sh postfix from userid id wed nov utc received from localhost message id date wed nov subject you have an osu supporter tag from osu to pe ppy sh mime version content type text html charset utf content transfer encoding quoted printable hey there peppy someone has just gifted you an osu supporter tag thanks to them you now have access osu direct and other supporter benefits for the next month you can find out more details on these features at support the person who gifted you this tag may choose to remain anonymous so they have not been mentioned in this notification but you likely already know who it is regards osu management | 1 |
76,816 | 3,493,819,343 | IssuesEvent | 2016-01-05 05:59:43 | twosigma/beaker-notebook | https://api.github.com/repos/twosigma/beaker-notebook | opened | SimpleTimePlot should handle Dates not just Numbers | Bug Output Plugins Priority High | <img width="690" alt="screen shot 2016-01-05 at 12 58 05 am" src="https://cloud.githubusercontent.com/assets/963093/12108582/78bf5e96-b347-11e5-8853-bc34a33076b1.png">
run it on branch spot/2755 | 1.0 | SimpleTimePlot should handle Dates not just Numbers - <img width="690" alt="screen shot 2016-01-05 at 12 58 05 am" src="https://cloud.githubusercontent.com/assets/963093/12108582/78bf5e96-b347-11e5-8853-bc34a33076b1.png">
run it on branch spot/2755 | priority | simpletimeplot should handle dates not just numbers img width alt screen shot at am src run it on branch spot | 1 |
724,875 | 24,944,042,548 | IssuesEvent | 2022-10-31 21:40:51 | Automattic/woocommerce-payments | https://api.github.com/repos/Automattic/woocommerce-payments | opened | Enhanced UPE - WCPay Compatibility: Payment Request Buttons | priority: high status: blocked impact: high component: upe | ### Description
<!-- A clear and concise description of what the new feature or improvement is. -->
This is the continuation on the integration of Stripe's new UPE payment flows. Please see [this post](pc2DNy-2e2-p2) for more details. This WCPay compatibility issue is one of many to ensure that existing WCPay functionality is unaffected and remains fully operational after refactoring the UPE to follow the new payment flow.
This issue is to ensure that existing Payment Request buttons (e.g. Apple/Google Pay) remain functioning normally.
This issue is currently blocked, as we await for Stripe to provide us access to their beta UPE release and while we wait to complete the PoC implementation of this new feature (#5019).
### Acceptance criteria
<!-- A list of predefined requirements that must be met in order to mark the issue as complete. -->
* Payment request buttons should appear at cart/checkout/store where enabled.
* Should be able to successfully complete checkout using both Apple and Google Pay.
### Additional context
<!-- Any additional context or details you think might be helpful. -->
<!-- Ticket numbers/links, P2s, project threads, etc. -->
[Stripe UPE – Options to solve WooPay compatibility and WC Pay loss of control over the payments UI/UX](pc2DNy-2e2-p2) | 1.0 | Enhanced UPE - WCPay Compatibility: Payment Request Buttons - ### Description
<!-- A clear and concise description of what the new feature or improvement is. -->
This is the continuation on the integration of Stripe's new UPE payment flows. Please see [this post](pc2DNy-2e2-p2) for more details. This WCPay compatibility issue is one of many to ensure that existing WCPay functionality is unaffected and remains fully operational after refactoring the UPE to follow the new payment flow.
This issue is to ensure that existing Payment Request buttons (e.g. Apple/Google Pay) remain functioning normally.
This issue is currently blocked, as we await for Stripe to provide us access to their beta UPE release and while we wait to complete the PoC implementation of this new feature (#5019).
### Acceptance criteria
<!-- A list of predefined requirements that must be met in order to mark the issue as complete. -->
* Payment request buttons should appear at cart/checkout/store where enabled.
* Should be able to successfully complete checkout using both Apple and Google Pay.
### Additional context
<!-- Any additional context or details you think might be helpful. -->
<!-- Ticket numbers/links, P2s, project threads, etc. -->
[Stripe UPE – Options to solve WooPay compatibility and WC Pay loss of control over the payments UI/UX](pc2DNy-2e2-p2) | priority | enhanced upe wcpay compatibility payment request buttons description this is the continuation on the integration of stripe s new upe payment flows please see for more details this wcpay compatibility issue is one of many to ensure that existing wcpay functionality is unaffected and remains fully operational after refactoring the upe to follow the new payment flow this issue is to ensure that existing payment request buttons e g apple google pay remain functioning normally this issue is currently blocked as we await for stripe to provide us access to their beta upe release and while we wait to complete the poc implementation of this new feature acceptance criteria payment request buttons should appear at cart checkout store where enabled should be able to successfully complete checkout using both apple and google pay additional context | 1 |
562,658 | 16,666,068,706 | IssuesEvent | 2021-06-07 04:07:17 | turbot/steampipe-mod-aws-compliance | https://api.github.com/repos/turbot/steampipe-mod-aws-compliance | closed | Getting error `'List' call requires an '=' qual for column: service_namespace` from table `aws_appautoscaling_target` | bug priority:high | **Describe the bug**
```
select * from aws_appautoscaling_target;
Error: 'List' call requires an '=' qual for column: service_namespace
```
**Steampipe version (`steampipe -v`)**
Example: v0.5.0
**Plugin version (`steampipe plugin list`)**
AWS: v0.18.0
**To reproduce**
**Expected behavior**
No errors
**Additional context**
N/A
| 1.0 | Getting error `'List' call requires an '=' qual for column: service_namespace` from table `aws_appautoscaling_target` - **Describe the bug**
```
select * from aws_appautoscaling_target;
Error: 'List' call requires an '=' qual for column: service_namespace
```
**Steampipe version (`steampipe -v`)**
Example: v0.5.0
**Plugin version (`steampipe plugin list`)**
AWS: v0.18.0
**To reproduce**
**Expected behavior**
No errors
**Additional context**
N/A
| priority | getting error list call requires an qual for column service namespace from table aws appautoscaling target describe the bug select from aws appautoscaling target error list call requires an qual for column service namespace steampipe version steampipe v example plugin version steampipe plugin list aws to reproduce expected behavior no errors additional context n a | 1 |
144,657 | 5,543,806,910 | IssuesEvent | 2017-03-22 17:42:28 | multidadosti-erp/multidadosti-addons | https://api.github.com/repos/multidadosti-erp/multidadosti-addons | opened | Add Client and Company to the Timesheet | Category: Backend Category: Frontend Priority: High Priority: Medium Stage: Backlog Type: Improvement | Add client and company fields to the Timesheet.
- [ ] Add the fields to the customer-service (atendimento) module
- [ ] Add the fields to the calendar module
- [ ] Update the timesheet entry creation code, adding the fields above | 2.0 | Add Client and Company to the Timesheet - Add client and company fields to the Timesheet.
- [ ] Add the fields to the customer-service (atendimento) module
- [ ] Add the fields to the calendar module
- [ ] Update the timesheet entry creation code, adding the fields above | priority | add client and company to the timesheet add client and company fields to the timesheet add the fields to the customer service atendimento module add the fields to the calendar module update the timesheet entry creation code adding the fields above | 1
468,293 | 13,464,797,833 | IssuesEvent | 2020-09-09 19:47:39 | denoland/deno_registry2 | https://api.github.com/repos/denoland/deno_registry2 | closed | Publishing is getting stuck in analyzing_dependencies | high priority operations | A build with these options is getting hung up (build times out after 5 minutes) in analyzing_dependencies:
```json
{
"type": "github",
"moduleName": "functional",
"repository": "sebastienfilion/functional",
"ref": "v0.5.0",
"version": "v0.5.0",
"subdir": "library/"
}
```
The build ID is `5f591efe00791e5b001a6c92`.
I am investigating. | 1.0 | Publishing is getting stuck in analyzing_dependencies - A build with these options is getting hung up (build times out after 5 minutes) in analyzing_dependencies:
```json
{
"type": "github",
"moduleName": "functional",
"repository": "sebastienfilion/functional",
"ref": "v0.5.0",
"version": "v0.5.0",
"subdir": "library/"
}
```
The build ID is `5f591efe00791e5b001a6c92`.
I am investigating. | priority | publishing is getting stuck in analyzing dependencies a build with these options is getting hung up build times out after minutes in analyzing dependencies json type github modulename functional repository sebastienfilion functional ref version subdir library the build id is i am investigating | 1 |
198,061 | 6,969,325,379 | IssuesEvent | 2017-12-11 04:30:48 | LordMonoxide/gradient | https://api.github.com/repos/LordMonoxide/gradient | closed | Flatten multi-items | enhancement high priority | This will simplify recipe registration, and will be required for the 1.13 upgrade anyway | 1.0 | Flatten multi-items - This will simplify recipe registration, and will be required for the 1.13 upgrade anyway | priority | flatten multi items this will simplify recipe registration and will be required for the upgrade anyway | 1 |
417,718 | 12,178,054,192 | IssuesEvent | 2020-04-28 08:22:33 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.0 staging-1528] Crash when press revise | Priority: High | ```
Server encountered an exception:
<size=60,00%>Exception: ArgumentException
Message:Property set method not found.
Source:System.Private.CoreLib
System.ArgumentException: Property set method not found.
at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, BindingFlags invokeAttr, Binder binder, Object[] index, CultureInfo culture)
at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, Object[] index)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Laws.TriggerSettings.CopyFrom(Object source)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Components.CivicSlot.MakeNew(User user, SimpleEntry copyFrom)
at Eco.Gameplay.Components.CivicObjectComponent.Edit(Player player, Int32 index)</size>
```
[Crash when press revise law.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4544285/Crash.when.press.revise.law.txt)
Step to reproduce:
- create law, /civics winelection, press revise law:

| 1.0 | [0.9.0 staging-1528] Crash when press revise - ```
Server encountered an exception:
<size=60,00%>Exception: ArgumentException
Message:Property set method not found.
Source:System.Private.CoreLib
System.ArgumentException: Property set method not found.
at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, BindingFlags invokeAttr, Binder binder, Object[] index, CultureInfo culture)
at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, Object[] index)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Laws.TriggerSettings.CopyFrom(Object source)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Civics.Misc.Cloner.Clone(Object source, Boolean topLevel)
at Eco.Gameplay.Civics.Misc.Cloner.CopyList(IList sourceVal, IList destVal)
at Eco.Gameplay.Civics.Misc.Cloner.CopyTo(Object source, Object dest)
at Eco.Gameplay.Components.CivicSlot.MakeNew(User user, SimpleEntry copyFrom)
at Eco.Gameplay.Components.CivicObjectComponent.Edit(Player player, Int32 index)</size>
```
[Crash when press revise law.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4544285/Crash.when.press.revise.law.txt)
Step to reproduce:
- create law, /civics winelection, press revise law:

| priority | crash when press revise server encountered an exception exception argumentexception message property set method not found source system private corelib system argumentexception property set method not found at system reflection runtimepropertyinfo setvalue object obj object value bindingflags invokeattr binder binder object index cultureinfo culture at system reflection runtimepropertyinfo setvalue object obj object value object index at eco gameplay civics misc cloner copyto object source object dest at eco gameplay civics misc cloner clone object source boolean toplevel at eco gameplay civics misc cloner copylist ilist sourceval ilist destval at eco gameplay civics laws triggersettings copyfrom object source at eco gameplay civics misc cloner copyto object source object dest at eco gameplay civics misc cloner clone object source boolean toplevel at eco gameplay civics misc cloner copylist ilist sourceval ilist destval at eco gameplay civics misc cloner copyto object source object dest at eco gameplay civics misc cloner clone object source boolean toplevel at eco gameplay civics misc cloner copylist ilist sourceval ilist destval at eco gameplay civics misc cloner copyto object source object dest at eco gameplay components civicslot makenew user user simpleentry copyfrom at eco gameplay components civicobjectcomponent edit player player index step to reproduce create law civics winelection press revise law | 1 |
342,190 | 10,313,197,827 | IssuesEvent | 2019-08-29 21:56:01 | Sp2000/colplus-repo | https://api.github.com/repos/Sp2000/colplus-repo | closed | Remove colons from virus genera | high priority | Virus names in the CoL do not follow the rules laid out by the ICTV:
https://talk.ictvonline.org/information/w/faq/386/how-to-write-virus-and-species-names
We should strip off at least the colon after the genus name so it becomes a real genus, not a string formatting hack. | 1.0 | Remove colons from virus genera - Virus names in the CoL do not follow the rules laid out by the ICTV:
https://talk.ictvonline.org/information/w/faq/386/how-to-write-virus-and-species-names
We should strip off at least the colon after the genus name so it becomes a real genus, not a string formatting hack. | priority | remove colons from virus genera virus names in the col do not follow the rules laid out by the ictv we should strip off at least the colon after the genus name so it becomes a real genus not a string formatting hack | 1
38,014 | 2,838,233,487 | IssuesEvent | 2015-05-27 05:54:20 | dispansible/dispansible | https://api.github.com/repos/dispansible/dispansible | opened | Adopt convention when overriding official modules | priority: high type: question | As of Ansible 1.9, the modules are shipped with `.py` extension (formerly packaged without extension).
Consequence: customized modules stored in `ansible/library` must be `.py` suffixed to effectively override official modules with Ansible 1.9+.
Solutions:
1. use `.py` and claim Ansible 1.9+ as requirement ( :disappointed: )
1. don't use the same name, but create new modules with a "well known prefix", e.g. `disp_apt_repository`. :bulb: :ok_woman:
The second option is not that bad, as it warranties that only the "patched / improved" modules are used in the needed places, and doesn't force all the ansible tasks to rely on them (e.g. an external role can benefit from a new module feature in parallel -> forcing Ansible upgrade by the way) | 1.0 | Adopt convention when overriding official modules - As of Ansible 1.9, the modules are shipped with `.py` extension (formerly packaged without extension).
Consequence: customized modules stored in `ansible/library` must be `.py` suffixed to effectively override official modules with Ansible 1.9+.
Solutions:
1. use `.py` and claim Ansible 1.9+ as requirement ( :disappointed: )
1. don't use the same name, but create new modules with a "well known prefix", e.g. `disp_apt_repository`. :bulb: :ok_woman:
The second option is not that bad, as it warranties that only the "patched / improved" modules are used in the needed places, and doesn't force all the ansible tasks to rely on them (e.g. an external role can benefit from a new module feature in parallel -> forcing Ansible upgrade by the way) | priority | adopt convention when overriding official modules as of ansible the modules are shipped with py extension formerly packaged without extension consequence customized modules stored in ansible library must be py suffixed to effectively override official modules with ansible solutions use py and claim ansible as requirement disappointed don t use the same name but create new modules with a well known prefix e g disp apt repository bulb ok woman the second option is not that bad as it warranties that only the patched improved modules are used in the needed places and doesn t force all the ansible tasks to rely on them e g an external role can benefit from a new module feature in parallel forcing ansible upgrade by the way | 1 |
174,576 | 6,541,270,111 | IssuesEvent | 2017-09-01 19:05:51 | rails-girls-summer-of-code/rgsoc-teams | https://api.github.com/repos/rails-girls-summer-of-code/rgsoc-teams | closed | Issue with user confirming workflow: clicking on confirmation link in email throws an error | bug high-priority | This bug is probably related to https://github.com/rails-girls-summer-of-code/rgsoc-teams/issues/635; we've had a couple of reports of people being unable to:
- receive a confirmation email and
- confirm their account once the confirmation email was triggered manually
When clicking on the token in the confirmation link, users are directed to a page that throws an internal server error (500). It seems that all users affected ended up with a confirmed account after clicking the link, so the issue might really just be in the page users get redirected to?
54,497 | 3,068,491,600 | IssuesEvent | 2015-08-18 15:52:07 | meetinghouse/cms | https://api.github.com/repos/meetinghouse/cms | closed | Light Theme: Need more numbers in sorting dropdown for Project order | High Priority | When creating or editing a project, the user can select the order in which the project will appear in its portfolio with a dropdown list of numbers 1-10.
We need more numbers! Can this be increased to, say, 50?
To see an example, see http://coastalclockandchime.corbettresearchgroup.com/projects/21/edit

705,534 | 24,238,343,244 | IssuesEvent | 2022-09-27 03:00:06 | umgc/fall2022 | https://api.github.com/repos/umgc/fall2022 | closed | Develop: Create Notification and NotificationSubscription data objects | enhancement *** HIGH PRIORITY *** | As a developer of the application, I need `Notification` and `NotificationSubscription` data objects to work with so that all developers aren't putting in temporary workaround code.
Acceptance Criteria
--
The following data objects are available for developers to reference:
- Add `Notification` object
- Timestamp
- EmailID (foreign key)
- MID (individual mail piece ID)
- NotificationSubscriptionKeyword (foreign key)
- Add `NotificationSubscription` object
- Keyword (string) (primary key to avoid duplicates)
- Objects are ideally added to the database unless it is not yet ready.
- Document any relevant content in the Programmer guide (draft thoughts of usage and importantly the "Why")
- Updates SRS and STP (and other docs if necessary)
- Add/Update relevant unit tests
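The bullet list above maps naturally onto two small record types. The story doesn't specify the app's implementation language, so the sketch below uses Python dataclasses purely for illustration; the class and field names are assumptions derived from the bullets, not the project's actual code:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class NotificationSubscription:
    # Keyword is the primary key, which is what prevents duplicates.
    keyword: str


@dataclass
class Notification:
    timestamp: datetime
    email_id: int   # foreign key to the email record
    mid: str        # individual mail piece ID
    keyword: str    # foreign key to NotificationSubscription.keyword


# frozen=True makes subscriptions hashable, so a set enforces the
# "no duplicate keywords" constraint in memory:
subs = {NotificationSubscription("bank"), NotificationSubscription("bank")}
print(len(subs))  # → 1
```

A database layer would enforce the same rules with a primary key on `keyword` and foreign-key constraints on the `Notification` table.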
343,142 | 10,325,689,486 | IssuesEvent | 2019-09-01 19:28:28 | LibreTexts/metalc | https://api.github.com/repos/LibreTexts/metalc | closed | Switch to a separate NFS file server for user data | high priority | From Richard:
> First steps here are to install CentOS on a virtual machine and set up and learn about ZFS, here: https://zfsonlinux.org
392,944 | 11,597,968,951 | IssuesEvent | 2020-02-24 22:00:15 | AY1920S2-CS2103T-T10-2/main | https://api.github.com/repos/AY1920S2-CS2103T-T10-2/main | closed | As a student, I can create decks to contain my flashcards | priority.High type.Story | so that I can organise my notes and modules.
349,717 | 10,472,318,112 | IssuesEvent | 2019-09-23 09:54:32 | bradyrx/climpred | https://api.github.com/repos/bradyrx/climpred | closed | Add vectorized deterministic and probabilistic metrics | high priority | Take advantage of existing: don't reinvent the wheel
- murcss:
- https://rawgit.com/illing2005/murcss/master/doc/build/html/tutorial.html
- all calculations based on CDOs
- built primarily for MPI-ESM Decadal Prediction Project MiKlip
- can handle Murphy-Eppstein decomposition, bias (really useful for hindcasts)
- can probably be used to create lead-year timeseries easily
- contains:
- MSESS (Mean Squared Error Skill Score)
- CRPSS (Continuous Ranked Probability Skill Score)
- properscoring
- https://github.com/TheClimateCorporation/properscoring
- contains:
- Continuous Ranked Probability Score (CRPS)
- Brier Score
- https://github.com/PeterRochford/SkillMetrics
- not applicable to maps?
New metrics should be able to be applied to mapped data (input data dimensions lon, lat, time, ensemble, member) and run fast. Otherwise bootstrapping will not work.
Let's post in this thread metrics we want to see in this package:
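For reference while comparing candidates, both headline metrics are simple enough to prototype directly. This is a plain scalar Python sketch (useful for validating a vectorized xarray/NumPy version against), not code from any of the packages listed above:

```python
def msess(forecast, reference, obs):
    """Mean Squared Error Skill Score: 1 - MSE(forecast) / MSE(reference)."""
    def mse(pred):
        return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    return 1.0 - mse(forecast) / mse(reference)


def crps_ensemble(members, y):
    """Empirical CRPS of an ensemble against one observation y:
    mean|x_i - y| - 0.5 * mean|x_i - x_j| (the standard kernel form)."""
    n = len(members)
    term1 = sum(abs(x - y) for x in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (2.0 * n * n)
    return term1 - term2


obs = [1.0, 2.0, 3.0]
climatology = [2.0, 2.0, 2.0]               # mean of obs as the reference forecast
print(msess(obs, climatology, obs))         # → 1.0, a perfect forecast
print(crps_ensemble([0.0, 1.0, 2.0], 1.0))  # → 0.2222..., i.e. 2/9
```

Vectorizing these over (lon, lat, time, member) dimensions is then a matter of replacing the Python sums with array reductions, which is what matters for bootstrapping speed.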
375,055 | 11,099,103,076 | IssuesEvent | 2019-12-16 16:22:58 | prysmaticlabs/prysm | https://api.github.com/repos/prysmaticlabs/prysm | closed | Handle Attester and Proposer Slashings in Operations Service | Priority: High Tracking | Currently, neither attester nor proposer slashing objects are handled in the beacon node. If we receive these objects from other peers, there is no specific service to hand them off to so that they are validated and can then be packaged into a block by a proposer. This issue is to track the implementation of handlers for slashing messages in the operations service.
Relevant to #2766
346,070 | 10,383,929,169 | IssuesEvent | 2019-09-10 10:46:10 | trustwallet/platform | https://api.github.com/repos/trustwallet/platform | closed | Split code into modules. Refactor code. | Priority: High | Implement an architecture that allows adding chains in a simple way with minimal lines of code. The code should be split into modules. Modules should be lazy loaded. Modules should implement blockchain-provider and should be created on the fly with different configurations. That approach will allow routing different coins based on the same blockchain.
262,086 | 8,250,622,716 | IssuesEvent | 2018-09-12 03:51:03 | AguaClara/aide | https://api.github.com/repos/AguaClara/aide | closed | Team restructuring for Fall 2018 | general high priority | I've been thinking about how we can improve upon the current structure of the AIDE subteam and where we're headed in the coming academic year. Here are some ideas I have:
## More Github, less GDrive
Upon cleaning out the [AIDE GDrive folder](https://drive.google.com/open?id=0B_v6wpES3UAuRW5vQXViOEt2TUk) (which was pretty messy), I realized that _the vast majority of our organizational tools can be ported over to the functionality in Github_: issues for to-do's, discussion boards for status updates, etc. I think the GDrive folder should only be used for:
- Files that don't work well with Github repos (e.g. our presentations, drawings/diagrams)
- Documents that need to be updated and written to by multiple people at the same time
Everything else should go in AIDE repositories, wikis, or AIDE teams' discussion boards. The benefit of this is that it reduces the amount of maintaining we'll have to do, and it will make AIDE team members more comfortable with using Github - something they'll inevitably have to do in their future careers.
## Less specialization of coding teams
AIDE members almost always fall into one of two skill sets: _CAD-ing_ and _coding_. There's definitely no expectation that team members should switch between those two skill sets. As such, AIDE Template should remain as a discrete (sub-)sub-team within AIDE.
But instead of having sub-teams that each specialize in a single AIDE module, I believe we should have _general coding teams that can work on any module necessary_. This could take the form of 2-3 coding teams (called AIDE Code A/B/C, or perhaps something that doesn't imply a hierarchy 😝), such that each team's members are working together for the entire semester. These coding teams wouldn't be tied down to a single AIDE module, and instead would have the freedom to switch between working on different modules.
### Benefits
- Team members will have a better, broader understanding of the entire AIDE tool and different software development tools
- Exposing more people to the same problem can bring in new ideas
- Avoid burnout on difficult tasks
### Drawbacks
- It will take much longer for a new team member to jump in
- Switching modules/problems will take extra time and energy
## Better coherence as a team
To resolve some of the above drawbacks, we should _actively build coherence of AIDE as a centralized subteam, rather than a bunch of sub-sub-teams._ When I first joined AIDE, I didn't quite understand _why_ I was doing what I was doing, and as a result, our work was confused and we sometimes had to backtrack. The first priority for AIDE in a given semester is that each team member understands:
1. What the AIDE tool is intended to accomplish for the end user
2. How each AIDE module works and interacts within the entire AIDE tool
3. How to write good code that is easy to read and understand
When new members join, they should be _required to complete both the [AguaClara Tutorial](https://github.com/AguaClara/aguaclara_tutorial) and the [AIDE Tutorial](https://github.com/AguaClara/aide_tutorial)_. There's a lot of knowledge in the AguaClara Tutorial that would be useful for AIDE members, especially with using Git and Github. I'm working on improving both tutorials to ease new members into the AIDE workflow.
It's also important that the _subteam's leadership is actively interacting with team members_ so that they know someone to turn to when they run into problems. For this, we could hold monthly team meetings - similar to the current leadership meetings, but with more involvement (as AIDE is so large, I don't expect everyone to be able to attend, but at least one person from each team should).
To facilitate teams switching between different modules/problems, we should have _"pivots": meetings in which one team teaches another team_ about what they've accomplished with their module, and provides tips on how to continue. These pivots will continually build on the team members' understanding of AIDE.
## Semester-long planning
There are about 15 weeks from when new AguaClara team members join to the final presentations - we'll have to plan wisely. A possible schedule:
- **1-2 weeks**: Learning about how AIDE works, old members teach new members
- **5-6 weeks**: Adding features, tackling problems, pivots, team meetings
- **1 week**: "Frozen code", during which no new features are added. Ensure that code is well written and documented, culminating in code reviews
- ***Mid-semester Symposium***
- **5-6 weeks**: Adding features, tackling problems, pivots, team meetings
- **1 week**: "Frozen code"
- ***Final Presentations***
125,998 | 4,971,311,933 | IssuesEvent | 2016-12-05 18:21:03 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | check that if user calls dmri.nii.gz dwi.nii.gz, there is no overwriting of file | priority: high sct_dmri_moco | because moco is supposed to output a file called dwi.nii.gz
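One way to satisfy this check is to test for a name collision before writing and suffix the output instead of overwriting. This is purely a sketch: `safe_output_path` and the `_moco` suffix are hypothetical names for illustration, not the actual sct_dmri_moco code:

```python
import os


def safe_output_path(path, suffix="_moco"):
    """Return `path` if it is free; otherwise insert `suffix` before the
    extension so an existing user file (e.g. dwi.nii.gz) is never clobbered."""
    if not os.path.exists(path):
        return path
    for ext in (".nii.gz", ".nii"):  # handle the double extension first
        if path.endswith(ext):
            return path[: -len(ext)] + suffix + ext
    root, ext = os.path.splitext(path)
    return root + suffix + ext


# If the user's input is already named dwi.nii.gz, the moco output becomes
# dwi_moco.nii.gz instead of silently overwriting the input file.
```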
547,363 | 16,041,797,127 | IssuesEvent | 2021-04-22 08:48:44 | sopra-fs21-group-05/group-05-server | https://api.github.com/repos/sopra-fs21-group-05/group-05-server | closed | create separate scoreboard controller and service | high priority task | - [x] Create Scoreboard Controller
- [x] Create Scoreboard Service
Time estimate: 1h
This task is part of user story #8 | 1.0 | create separate scoreboard controller and service - - [x] Create Scoreboard Controller
- [x] Create Scoreboard Service
Time estimate: 1h
This task is part of user story #8 | priority | create separate scoreboard controller and service create scoreboard controller create scoreboard service time estimate this task is part of user story | 1 |
751,186 | 26,232,382,895 | IssuesEvent | 2023-01-05 02:11:50 | tremorlabs/tremor | https://api.github.com/repos/tremorlabs/tremor | closed | Datepicker doesn't work if values are relying on state outside component since whole component rerenders | Priority: High Type: Question | So I am facing the following issue. I am fetching data from my API via a hook (in the code `useHeadcount`). I am using Nextjs & SWR for fetching. This hook needs the `startDate` and `endDate` from tremor's Datepicker to query the API for the headcount in that range. I am controlling those outside with a simple useState. Whenever a user selects the dates from the Datepicker component, the `handleFilterDate` is called and updates the variables.
The issue that arises here is that the whole component is mounted and re-rendered whenever I select something from the datepicker, resetting `startDate` and `endDate` back to the default values I set. This means a user is unable to select a date range; it always snaps back to the defaults. The behavior can be seen in the screen recording [here](https://youtu.be/iuVKRVLvmCE).
I tried to use useCallback (on handleFilterDate) and useMemo (on the DatePicker component), but unsuccessfully. The only ugly solution was to refactor the whole component so that the Datepicker does not depend on data, isLoading, or error and is therefore unaffected by the re-render. I'll paste the ugly solution below.
There has to be a better and cleaner way to solve this. Any ideas? The ugly solution can't be the only one...
Code:
```
const HeadcountCard = () => {
const { workspace } = useWorkspace();
const currentYear = new Date().getFullYear();
const [startDate, setStartDate] = useState(new Date(currentYear, 0, 1));
const [endDate, setEndDate] = useState(new Date());
const { isLoading, isError, data } = useHeadcount(
workspace.slug,
startDate,
endDate,
);
    // TODO: Right now there is a bug where it always resets the date back to the original startDate & endDate
const handleFilterDate = (sDate: Date, eDate: Date) => {
setStartDate(sDate);
setEndDate(eDate);
};
if (isLoading) {
return <LoadingMetricCard />;
}
if (isError) {
return <DataFetchingErrorCard />;
}
return (
<MetricCard>
<div className="flex justify-between items-center">
<div className="flex flex-row items-center space-x-10">
<div className="flex flex-row space-x-2">
<div className="text-xl font-semibold dark:text-white">
Headcount
</div>
<div className="text-xl font-bold">{data.headcount.length}</div>
</div>
<div className="flex flex-row items-center space-x-3">
<BadgeDelta deltaType={"increase"} text={"23.9%"} />
<div className="text-sm text-stone-400">YTD</div>
</div>
</div>
<div>
<Link
className="text-blue-600"
href={`/dashboard/${workspace.slug}/metrics/headcount`}
>
Details →
</Link>
</div>
</div>
<div className="space-y-8">
<AreaChart
data={data.headcount}
categories={["employees"]}
dataKey="date"
height="h-72"
colors={["cyan", "indigo"]}
marginTop="mt-4"
/>
<Datepicker
placeholder="Select..."
enableRelativeDates={true}
handleSelect={handleFilterDate}
defaultRelativeFilterOption="y"
defaultStartDate={null}
defaultEndDate={null}
minDate={null}
maxDate={null}
color="blue"
maxWidth="max-w-none"
marginTop="mt-0"
/>
</div>
</MetricCard>
);
};
export default HeadcountCard;
```
Ugly solution that works...
```
function HeadcountCardPartial({
slug,
headcount,
loading,
error,
}: {
slug: string;
headcount: [
{
interval: number;
date: Date;
employees: number;
},
];
loading: boolean;
error: Error;
}) {
if (loading) {
return <LoadingMetricCard />;
}
if (error) {
return <DataFetchingErrorCard />;
}
return (
<>
<div className="flex justify-between items-center">
<div className="flex flex-row items-center space-x-10">
<div className="flex flex-row space-x-2">
<div className="text-xl font-semibold dark:text-white">
Headcount
</div>
<div className="text-xl font-bold">{headcount.length}</div>
</div>
<div className="flex flex-row items-center space-x-3">
<BadgeDelta deltaType={"increase"} text={"23.9%"} />
<div className="text-sm text-stone-400">YTD</div>
</div>
</div>
<div>
<Link
className="text-blue-600"
href={`/dashboard/${slug}/metrics/headcount`}
>
Details →
</Link>
</div>
</div>
</>
);
}
function HeadcountCard() {
const { workspace } = useWorkspace();
const currentYear = new Date().getFullYear();
const [startDate, setStartDate] = useState(new Date(currentYear, 0, 1));
const [endDate, setEndDate] = useState(new Date());
const { isLoading, isError, data } = useHeadcount(
workspace.slug,
startDate,
endDate,
);
const handleFilterDate = (sDate: Date, eDate: Date) => {
setStartDate(sDate);
setEndDate(eDate);
};
return (
<MetricCard>
<HeadcountCardPartial
slug={workspace.slug}
headcount={data?.headcount}
loading={isLoading}
error={isError}
/>
<div className="space-y-8">
{!isLoading && (
<AreaChart
data={data?.headcount}
categories={["employees"]}
dataKey="date"
height="h-72"
colors={["cyan", "indigo"]}
marginTop="mt-4"
/>
)}
<Datepicker
placeholder="Select..."
enableRelativeDates={true}
handleSelect={handleFilterDate}
defaultRelativeFilterOption="y"
defaultStartDate={null}
defaultEndDate={null}
minDate={null}
maxDate={null}
color="blue"
maxWidth="max-w-none"
marginTop="mt-0"
/>
</div>
</MetricCard>
);
}
export default HeadcountCard;
```
| 1.0 | Datepicker doesn't work if values are relying on state outside component since whole component rerenders - So I am facing the following issue. I am fetching data from my API via a hook (in the code `useHeadcount`). I am using Nextjs & SWR for fetching. This hook needs the `startDate` and `endDate` from tremor's Datepicker to query the API for the headcount in that range. I am controlling those outside with a simple useState. Whenever a user selects the dates from the Datepicker component, the `handleFilterDate` is called and updates the variables.
The issue that arrises here is that the whole component is mounted and re-rendered, whenever I select something from the datepicker, resetting `startDate` and `endDate` back to the default values I set. This results that a user is unable to select a date range, always going back to the defaults. The behavior can be seen in the screen recording [here](https://youtu.be/iuVKRVLvmCE).
I tried to use useCallback (on handleFilterDate) and useMemo (on the DatePicker component) but unsuccessfully. The only ugly solution was to refactor the whole component, making Datepicker not relying on data, isLoading or error and due to the rerender. I'll paste the ugly solution below.
There has to be a better & cleaner way how to solve this. Any ideas what to do that? The ugly solution can't be the only solution...
Code:
```
const HeadcountCard = () => {
const { workspace } = useWorkspace();
const currentYear = new Date().getFullYear();
const [startDate, setStartDate] = useState(new Date(currentYear, 0, 1));
const [endDate, setEndDate] = useState(new Date());
const { isLoading, isError, data } = useHeadcount(
workspace.slug,
startDate,
endDate,
);
// TODO: Right now there is a bug that it alwaya resets the date again to original startDate & endDate
const handleFilterDate = (sDate: Date, eDate: Date) => {
setStartDate(sDate);
setEndDate(eDate);
};
if (isLoading) {
return <LoadingMetricCard />;
}
if (isError) {
return <DataFetchingErrorCard />;
}
return (
<MetricCard>
<div className="flex justify-between items-center">
<div className="flex flex-row items-center space-x-10">
<div className="flex flex-row space-x-2">
<div className="text-xl font-semibold dark:text-white">
Headcount
</div>
<div className="text-xl font-bold">{data.headcount.length}</div>
</div>
<div className="flex flex-row items-center space-x-3">
<BadgeDelta deltaType={"increase"} text={"23.9%"} />
<div className="text-sm text-stone-400">YTD</div>
</div>
</div>
<div>
<Link
className="text-blue-600"
href={`/dashboard/${workspace.slug}/metrics/headcount`}
>
Details →
</Link>
</div>
</div>
<div className="space-y-8">
<AreaChart
data={data.headcount}
categories={["employees"]}
dataKey="date"
height="h-72"
colors={["cyan", "indigo"]}
marginTop="mt-4"
/>
<Datepicker
placeholder="Select..."
enableRelativeDates={true}
handleSelect={handleFilterDate}
defaultRelativeFilterOption="y"
defaultStartDate={null}
defaultEndDate={null}
minDate={null}
maxDate={null}
color="blue"
maxWidth="max-w-none"
marginTop="mt-0"
/>
</div>
</MetricCard>
);
};
export default HeadcountCard;
```
Ugly solution that works...
```
function HeadcountCardPartial({
slug,
headcount,
loading,
error,
}: {
slug: string;
headcount: [
{
interval: number;
date: Date;
employees: number;
},
];
loading: boolean;
error: Error;
}) {
if (loading) {
return <LoadingMetricCard />;
}
if (error) {
return <DataFetchingErrorCard />;
}
return (
<>
<div className="flex justify-between items-center">
<div className="flex flex-row items-center space-x-10">
<div className="flex flex-row space-x-2">
<div className="text-xl font-semibold dark:text-white">
Headcount
</div>
<div className="text-xl font-bold">{headcount.length}</div>
</div>
<div className="flex flex-row items-center space-x-3">
<BadgeDelta deltaType={"increase"} text={"23.9%"} />
<div className="text-sm text-stone-400">YTD</div>
</div>
</div>
<div>
<Link
className="text-blue-600"
href={`/dashboard/${slug}/metrics/headcount`}
>
Details →
</Link>
</div>
</div>
</>
);
}
function HeadcountCard() {
const { workspace } = useWorkspace();
const currentYear = new Date().getFullYear();
const [startDate, setStartDate] = useState(new Date(currentYear, 0, 1));
const [endDate, setEndDate] = useState(new Date());
const { isLoading, isError, data } = useHeadcount(
workspace.slug,
startDate,
endDate,
);
const handleFilterDate = (sDate: Date, eDate: Date) => {
setStartDate(sDate);
setEndDate(eDate);
};
return (
<MetricCard>
<HeadcountCardPartial
slug={workspace.slug}
headcount={data?.headcount}
loading={isLoading}
error={isError}
/>
<div className="space-y-8">
{!isLoading && (
<AreaChart
data={data?.headcount}
categories={["employees"]}
dataKey="date"
height="h-72"
colors={["cyan", "indigo"]}
marginTop="mt-4"
/>
)}
<Datepicker
placeholder="Select..."
enableRelativeDates={true}
handleSelect={handleFilterDate}
defaultRelativeFilterOption="y"
defaultStartDate={null}
defaultEndDate={null}
minDate={null}
maxDate={null}
color="blue"
maxWidth="max-w-none"
marginTop="mt-0"
/>
</div>
</MetricCard>
);
}
export default HeadcountCard;
```
| priority | datepicker doesn t work if values are relying on state outside component since whole component rerenders so i am facing the following issue i am fetching data from my api via a hook in the code useheadcount i am using nextjs swr for fetching this hook needs the startdate and enddate from tremor s datepicker to query the api for the headcount in that range i am controlling those outside with a simple usestate whenever a user selects the dates from the datepicker component the handlefilterdate is called and updates the variables the issue that arrises here is that the whole component is mounted and re rendered whenever i select something from the datepicker resetting startdate and enddate back to the default values i set this results that a user is unable to select a date range always going back to the defaults the behavior can be seen in the screen recording i tried to use usecallback on handlefilterdate and usememo on the datepicker component but unsuccessfully the only ugly solution was to refactor the whole component making datepicker not relying on data isloading or error and due to the rerender i ll paste the ugly solution below there has to be a better cleaner way how to solve this any ideas what to do that the ugly solution can t be the only solution code const headcountcard const workspace useworkspace const currentyear new date getfullyear const usestate new date currentyear const usestate new date const isloading iserror data useheadcount workspace slug startdate enddate todo right now there is a bug that it alwaya resets the date again to original startdate enddate const handlefilterdate sdate date edate date setstartdate sdate setenddate edate if isloading return if iserror return return headcount data headcount length ytd link classname text blue href dashboard workspace slug metrics headcount details rarr areachart data data headcount categories datakey date height h colors margintop mt datepicker placeholder select enablerelativedates 
true handleselect handlefilterdate defaultrelativefilteroption y defaultstartdate null defaultenddate null mindate null maxdate null color blue maxwidth max w none margintop mt export default headcountcard ugly solution that works function headcountcardpartial slug headcount loading error slug string headcount interval number date date employees number loading boolean error error if loading return if error return return headcount headcount length ytd link classname text blue href dashboard slug metrics headcount details rarr function headcountcard const workspace useworkspace const currentyear new date getfullyear const usestate new date currentyear const usestate new date const isloading iserror data useheadcount workspace slug startdate enddate const handlefilterdate sdate date edate date setstartdate sdate setenddate edate return headcountcardpartial slug workspace slug headcount data headcount loading isloading error iserror isloading areachart data data headcount categories datakey date height h colors margintop mt datepicker placeholder select enablerelativedates true handleselect handlefilterdate defaultrelativefilteroption y defaultstartdate null defaultenddate null mindate null maxdate null color blue maxwidth max w none margintop mt export default headcountcard | 1 |
292,760 | 8,967,615,644 | IssuesEvent | 2019-01-29 04:18:03 | tadoku/api | https://api.github.com/repos/tadoku/api | closed | Add middleware for endpoint with authentication | complexity: high help wanted priority: high todo | ## What
- [x] Research JWT authentication with echo
- [x] Figure out how to abstract that away with the current implementation of routes
- [x] Implement a solution so it can be easily reused | 1.0 | Add middleware for endpoint with authentication - ## What
- [x] Research JWT authentication with echo
- [x] Figure out how to abstract that away with the current implementation of routes
- [x] Implement a solution so it can be easily reused | priority | add middleware for endpoint with authentication what research jwt authentication with echo figure out how to abstract that away with the current implementation of routes implement a solution so it can be easily reused | 1 |
465,249 | 13,369,233,639 | IssuesEvent | 2020-09-01 08:32:24 | zeebe-io/zeebe | https://api.github.com/repos/zeebe-io/zeebe | closed | Change default values for diskUsage watermarks | Impact: Usability Priority: High Scope: broker Status: Needs Review Type: Maintenance | **Description**
Current defaults for diskUsageCommandWatermark and diskUsageReplicationWatermark are 0.8 and 0.9. That means, we need to have atleast 20% unused disk space. With these defaults, I am not able to run a broker on a machine that has 50GB free :smile:
I would suggest that we increase the defaults. Otherwise some one trying out Zeebe for first time might fail to run it. This would mean that defaults may be not good for production and one should tune them for production deployment.
| 1.0 | Change default values for diskUsage watermarks - **Description**
Current defaults for diskUsageCommandWatermark and diskUsageReplicationWatermark are 0.8 and 0.9. That means, we need to have atleast 20% unused disk space. With these defaults, I am not able to run a broker on a machine that has 50GB free :smile:
I would suggest that we increase the defaults. Otherwise some one trying out Zeebe for first time might fail to run it. This would mean that defaults may be not good for production and one should tune them for production deployment.
| priority | change default values for diskusage watermarks description current defaults for diskusagecommandwatermark and diskusagereplicationwatermark are and that means we need to have atleast unused disk space with these defaults i am not able to run a broker on a machine that has free smile i would suggest that we increase the defaults otherwise some one trying out zeebe for first time might fail to run it this would mean that defaults may be not good for production and one should tune them for production deployment | 1 |
207,032 | 7,123,896,948 | IssuesEvent | 2018-01-19 16:51:02 | conveyal/trimet-mod-otp | https://api.github.com/repos/conveyal/trimet-mod-otp | reopened | BUG (High): Map Click Origin/Destination Select | bug high priority | Double Click is a defacto map zoom control. Don't put down origin / destination calipers when double clicking the map. Detect double clicks and click-and move map, and don't place calipers on those common map manipulations.

| 1.0 | BUG (High): Map Click Origin/Destination Select - Double Click is a defacto map zoom control. Don't put down origin / destination calipers when double clicking the map. Detect double clicks and click-and move map, and don't place calipers on those common map manipulations.

| priority | bug high map click origin destination select double click is a defacto map zoom control don t put down origin destination calipers when double clicking the map detect double clicks and click and move map and don t place calipers on those common map manipulations | 1 |
282,577 | 8,708,145,784 | IssuesEvent | 2018-12-06 10:04:07 | Dcom-KHU/2018-2-smart-light | https://api.github.com/repos/Dcom-KHU/2018-2-smart-light | closed | 전구 밝기, 전원 조절 | High Priority | - setBrightness(lightId ,number)
- [x] hue 밝기 조절 코드 [로컬]
- [x] hue remote api 인증 토큰 받기
- [ ] hue remote api 사용, 예제 돌려보기
- [ ] aws lambda로 hue 밝기 조절
| 1.0 | 전구 밝기, 전원 조절 - - setBrightness(lightId ,number)
- [x] hue 밝기 조절 코드 [로컬]
- [x] hue remote api 인증 토큰 받기
- [ ] hue remote api 사용, 예제 돌려보기
- [ ] aws lambda로 hue 밝기 조절
| priority | 전구 밝기 전원 조절 setbrightness lightid number hue 밝기 조절 코드 hue remote api 인증 토큰 받기 hue remote api 사용 예제 돌려보기 aws lambda로 hue 밝기 조절 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.