Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
470,079 | 13,530,669,420 | IssuesEvent | 2020-09-15 20:17:53 | creativecommons/creativecommons.github.io-source | https://api.github.com/repos/creativecommons/creativecommons.github.io-source | closed | Split Sass into smaller files | good first issue help wanted ✨ goal: improvement 🕹 aspect: interface 🟩 priority: low 🤖 aspect: dx 🧹 status: ticket work required | Currently all styling from the site comes from a single Sass file `main.scss` in `webpack/sass`. This causes:
- a maintenance burden (at the time of writing, the file contains > 700 lines)
- longer loading times (the single `.css` output for the entire site is loaded on the first page visit)
Since Webpack is being used, a better way would be to split the file into smaller chunks and use a number of different entrypoints to facilitate code splitter for a more manageable experience for developers and faster loading experience for end users. | 1.0 | Split Sass into smaller files - Currently all styling from the site comes from a single Sass file `main.scss` in `webpack/sass`. This causes:
- a maintenance burden (at the time of writing, the file contains > 700 lines)
- longer loading times (the single `.css` output for the entire site is loaded on the first page visit)
Since Webpack is being used, a better way would be to split the file into smaller chunks and use a number of different entrypoints to facilitate code splitter for a more manageable experience for developers and faster loading experience for end users. | priority | split sass into smaller files currently all styling from the site comes from a single sass file main scss in webpack sass this causes a maintenance burden at the time of writing the file contains lines longer loading times the single css output for the entire site is loaded on the first page visit since webpack is being used a better way would be to split the file into smaller chunks and use a number of different entrypoints to facilitate code splitter for a more manageable experience for developers and faster loading experience for end users | 1 |
104,460 | 4,211,988,527 | IssuesEvent | 2016-06-29 15:03:23 | BugBusterSWE/MaaS | https://api.github.com/repos/BugBusterSWE/MaaS | closed | Aggiungere export mancanti | priority:low Programmer | *Codice in cui si trova il problema*:
activity #65
*Descrizione del problema*:
Aggiungere gli `export` necessari su `databaseModel.ts` per il testing.
Link task: [https://bugbusters.teamwork.com/tasks/7410598](https://bugbusters.teamwork.com/tasks/7410598) | 1.0 | Aggiungere export mancanti - *Codice in cui si trova il problema*:
activity #65
*Descrizione del problema*:
Aggiungere gli `export` necessari su `databaseModel.ts` per il testing.
Link task: [https://bugbusters.teamwork.com/tasks/7410598](https://bugbusters.teamwork.com/tasks/7410598) | priority | aggiungere export mancanti codice in cui si trova il problema activity descrizione del problema aggiungere gli export necessari su databasemodel ts per il testing link task | 1 |
283,137 | 8,717,054,293 | IssuesEvent | 2018-12-07 16:04:12 | FlorianMaak/p0weruser | https://api.github.com/repos/FlorianMaak/p0weruser | closed | Vollbildmodus | Low Priority feature request wontfix | ## Allgemeine Informationen
**Browser:** Alle
**Version:** Alle
**Modul:** WidescreenMode
**Description**
> Ein Button für einen Vollbildmodus sollte ergänzt werden, wenn es sich bei dem Medium um ein Video handelt.
| 1.0 | Vollbildmodus - ## Allgemeine Informationen
**Browser:** Alle
**Version:** Alle
**Modul:** WidescreenMode
**Description**
> Ein Button für einen Vollbildmodus sollte ergänzt werden, wenn es sich bei dem Medium um ein Video handelt.
| priority | vollbildmodus allgemeine informationen browser alle version alle modul widescreenmode description ein button für einen vollbildmodus sollte ergänzt werden wenn es sich bei dem medium um ein video handelt | 1 |
84,023 | 3,647,496,535 | IssuesEvent | 2016-02-16 01:10:28 | shayanik/Shayannon-Wedding | https://api.github.com/repos/shayanik/Shayannon-Wedding | opened | Mini Bachelor Party | Ideation Low Priority | What needs to get done:
- [ ] Decide on a date
- [ ] Email everyone:
* Shayan
* Andrew
* Mike
* Rod
* Shauhin
* Payam
* Reza
* Kevin
* Jason
* Nitin
* Rishey
* Walter
* Nikki
- [ ] Decide on activities
- [ ] Make reservations
| 1.0 | Mini Bachelor Party - What needs to get done:
- [ ] Decide on a date
- [ ] Email everyone:
* Shayan
* Andrew
* Mike
* Rod
* Shauhin
* Payam
* Reza
* Kevin
* Jason
* Nitin
* Rishey
* Walter
* Nikki
- [ ] Decide on activities
- [ ] Make reservations
| priority | mini bachelor party what needs to get done decide on a date email everyone shayan andrew mike rod shauhin payam reza kevin jason nitin rishey walter nikki decide on activities make reservations | 1 |
225,178 | 7,479,215,261 | IssuesEvent | 2018-04-04 14:03:45 | cjlee112/socraticqs2 | https://api.github.com/repos/cjlee112/socraticqs2 | closed | Error model is repeated in the Chat UI after editing it | Bug Low [Priority] | STR:
1. Add an error model to a course
2. Open it in the preview course or as enrolled student and take a look at the created error model
3. Edit the error model (change title or text) and save it
4. Open it in the preview course or as enrolled student again and take a look at the amount of error
ER: Amount of errors in the Chat UI is the same as amount of errors in the unit
AR: Amount of errors in the Chat UI is bigger than amount of errors in the


unit
| 1.0 | Error model is repeated in the Chat UI after editing it - STR:
1. Add an error model to a course
2. Open it in the preview course or as enrolled student and take a look at the created error model
3. Edit the error model (change title or text) and save it
4. Open it in the preview course or as enrolled student again and take a look at the amount of error
ER: Amount of errors in the Chat UI is the same as amount of errors in the unit
AR: Amount of errors in the Chat UI is bigger than amount of errors in the


unit
| priority | error model is repeated in the chat ui after editing it str add an error model to a course open it in the preview course or as enrolled student and take a look at the created error model edit the error model change title or text and save it open it in the preview course or as enrolled student again and take a look at the amount of error er amount of errors in the chat ui is the same as amount of errors in the unit ar amount of errors in the chat ui is bigger than amount of errors in the unit | 1 |
544,335 | 15,892,524,199 | IssuesEvent | 2021-04-11 00:29:59 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | tests/subsys/canbus/isotp/implementation: failing on nucleo_f746zg | Stale area: CAN bug platform: STM32 priority: low | **Describe the bug**
tests/subsys/canbus/isotp/implementation: failing on nucleo_f746zg (and stm32f3_disco)
**To Reproduce**
1- twister --hardware-map ../map.yaml --device-testing -p nucleo_746zg -T tests/subsys/canbus/isotp/conformance
2- See error
**Expected behavior**
Test should pass
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
$ cat /local/mcu/zephyrproject/zephyr/twister out/nucleo_f746zg/tests/subsys/canbus/isotp/implementation/can.isotp.implemmentation/handler.log
```
I: Init of CAN_1 done
*** Booting Zephyr OS build zephyr-v2.4.0-2807-gb34d05592666 ***
Running test suite isotp
===================================================================
START - test_bind_unbind
I: Got a frame in a state where it is unexpected.
E: ***** USAGE FAULT *****
E: Unaligned memory access
E: r0/a1: 0x080040ad r1/a2: 0x080040ad r2/a3: 0x00000040
E: r3/a4: 0x080040b5 r12/ip: 0x20012070 r14/lr: 0x08002165
E: xpsr: 0x01000024
E: Faulting instruction address (r15/pc): 0x08003fc6
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x20010788 (sysworkq)
E: Halting system
```
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Toolchain: Zephyr SDK 0.12
- Commit SHA: zephyr-v2.4.0-2807-gb34d055926
**Additional context**
No additional wire used with the board
| 1.0 | tests/subsys/canbus/isotp/implementation: failing on nucleo_f746zg - **Describe the bug**
tests/subsys/canbus/isotp/implementation: failing on nucleo_f746zg (and stm32f3_disco)
**To Reproduce**
1- twister --hardware-map ../map.yaml --device-testing -p nucleo_746zg -T tests/subsys/canbus/isotp/conformance
2- See error
**Expected behavior**
Test should pass
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
$ cat /local/mcu/zephyrproject/zephyr/twister out/nucleo_f746zg/tests/subsys/canbus/isotp/implementation/can.isotp.implemmentation/handler.log
```
I: Init of CAN_1 done
*** Booting Zephyr OS build zephyr-v2.4.0-2807-gb34d05592666 ***
Running test suite isotp
===================================================================
START - test_bind_unbind
I: Got a frame in a state where it is unexpected.
E: ***** USAGE FAULT *****
E: Unaligned memory access
E: r0/a1: 0x080040ad r1/a2: 0x080040ad r2/a3: 0x00000040
E: r3/a4: 0x080040b5 r12/ip: 0x20012070 r14/lr: 0x08002165
E: xpsr: 0x01000024
E: Faulting instruction address (r15/pc): 0x08003fc6
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x20010788 (sysworkq)
E: Halting system
```
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Toolchain: Zephyr SDK 0.12
- Commit SHA: zephyr-v2.4.0-2807-gb34d055926
**Additional context**
No additional wire used with the board
| priority | tests subsys canbus isotp implementation failing on nucleo describe the bug tests subsys canbus isotp implementation failing on nucleo and disco to reproduce twister hardware map map yaml device testing p nucleo t tests subsys canbus isotp conformance see error expected behavior test should pass impact what impact does this issue have on your progress e g annoyance showstopper logs and console output cat local mcu zephyrproject zephyr twister out nucleo tests subsys canbus isotp implementation can isotp implemmentation handler log i init of can done booting zephyr os build zephyr running test suite isotp start test bind unbind i got a frame in a state where it is unexpected e usage fault e unaligned memory access e e ip lr e xpsr e faulting instruction address pc e zephyr fatal error cpu exception on cpu e fault during interrupt handling e current thread sysworkq e halting system environment please complete the following information os ubuntu toolchain zephyr sdk commit sha zephyr additional context no additional wire used with the board | 1 |
693,749 | 23,788,821,826 | IssuesEvent | 2022-09-02 12:47:05 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | RUCSS compatibility with CloudFlare Server Push | type: enhancement 3rd party compatibility priority: low effort: [S] module: remove unused css | **Before submitting an issue please check that you’ve completed the following steps:**
- [x] Made sure you’re on the latest version `3.10.7`
- [x] Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When using the CloudFlare plugin and enabling Server Push using the following constant, CloudFlare will add all resources with the `rel=preload` hint on the page. Including CSS.
~~~
define('CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE', true);
~~~
We will end up with resources loaded by the page. Example:
~~~
<link rel="preload" href="/wp-content/themes/astra/assets/css/minified/main.min.css?ver=3.7.7" as="style">
<link rel="preload" href="/wp-includes/css/dist/block-library/style.min.css?ver=5.9" as="style">
~~~
What this does is load the CSS resources even if Remove Unused CSS is used.
It will give a console warning:
~~~
The resource /wp-content/themes/astra/assets/css/minified/main.min.css?ver=3.7.7 was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and is preloaded intentionally.
~~~
This warning might show on the PageSpeed Insights results as well.
The customer might think that the Remove Unused CSS is not operating properly. Or think that we are preloading the CSS files.
**To Reproduce**
Steps to reproduce the behavior:
1. Using the official CloudFlare plugin
2. Add `define('CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE', true);` to the `wp-config.php` file
3. See error
**Expected behavior**
Ideally, we should remove the resources hints related to CSS from the page after adding the Used CSS.
Or at least warn the user using a notice that `CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE` is set to `true`, and what it will implicate (loading CSS files + getting the warning about the preloaded assets).
**Screenshots**


**Additional context**
Ticket - https://secure.helpscout.net/conversation/1763607803/320562/
Potentially linked to the effort for https://github.com/wp-media/wp-rocket/issues/3180 if running CF APO compatibility keeps Cloudflare's official plugin enabled.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| 1.0 | RUCSS compatibility with CloudFlare Server Push - **Before submitting an issue please check that you’ve completed the following steps:**
- [x] Made sure you’re on the latest version `3.10.7`
- [x] Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When using the CloudFlare plugin and enabling Server Push using the following constant, CloudFlare will add all resources with the `rel=preload` hint on the page. Including CSS.
~~~
define('CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE', true);
~~~
We will end up with resources loaded by the page. Example:
~~~
<link rel="preload" href="/wp-content/themes/astra/assets/css/minified/main.min.css?ver=3.7.7" as="style">
<link rel="preload" href="/wp-includes/css/dist/block-library/style.min.css?ver=5.9" as="style">
~~~
What this does is load the CSS resources even if Remove Unused CSS is used.
It will give a console warning:
~~~
The resource /wp-content/themes/astra/assets/css/minified/main.min.css?ver=3.7.7 was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and is preloaded intentionally.
~~~
This warning might show on the PageSpeed Insights results as well.
The customer might think that the Remove Unused CSS is not operating properly. Or think that we are preloading the CSS files.
**To Reproduce**
Steps to reproduce the behavior:
1. Using the official CloudFlare plugin
2. Add `define('CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE', true);` to the `wp-config.php` file
3. See error
**Expected behavior**
Ideally, we should remove the resources hints related to CSS from the page after adding the Used CSS.
Or at least warn the user using a notice that `CLOUDFLARE_HTTP2_SERVER_PUSH_ACTIVE` is set to `true`, and what it will implicate (loading CSS files + getting the warning about the preloaded assets).
**Screenshots**


**Additional context**
Ticket - https://secure.helpscout.net/conversation/1763607803/320562/
Potentially linked to the effort for https://github.com/wp-media/wp-rocket/issues/3180 if running CF APO compatibility keeps Cloudflare's official plugin enabled.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| priority | rucss compatibility with cloudflare server push before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version used the search feature to ensure that the bug hasn’t been reported before describe the bug when using the cloudflare plugin and enabling server push using the following constant cloudflare will add all resources with the rel preload hint on the page including css define cloudflare server push active true we will end up with resources loaded by the page example what this does is load the css resources even if remove unused css is used it will give a console warning the resource wp content themes astra assets css minified main min css ver was preloaded using link preload but not used within a few seconds from the window s load event please make sure it has an appropriate as value and is preloaded intentionally this warning might show on the pagespeed insights results as well the customer might think that the remove unused css is not operating properly or think that we are preloading the css files to reproduce steps to reproduce the behavior using the official cloudflare plugin add define cloudflare server push active true to the wp config php file see error expected behavior ideally we should remove the resources hints related to css from the page after adding the used css or at least warn the user using a notice that cloudflare server push active is set to true and what it will implicate loading css files getting the warning about the preloaded assets screenshots additional context ticket potentially linked to the effort for if running cf apo compatibility keeps cloudflare s official plugin enabled backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort | 1 |
41,236 | 2,868,988,369 | IssuesEvent | 2015-06-05 22:24:16 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Add install/uninstall/activate/deactivate hooks to pub | Area-Pub Priority-Low Triaged Type-Enhancement | To repro:
pub global activate stagehand
mkdir empty
cd empty
stagehand webapp
(say Y to analytics)
pub global deactivate stagehand
cat ~/.stagehand
Expected: ~/.stagehand file is deleted (or, a notice is sent to stagehand that the package is getting deleted)
Actual: apparently (?) no signal on uninstall | 1.0 | Add install/uninstall/activate/deactivate hooks to pub - To repro:
pub global activate stagehand
mkdir empty
cd empty
stagehand webapp
(say Y to analytics)
pub global deactivate stagehand
cat ~/.stagehand
Expected: ~/.stagehand file is deleted (or, a notice is sent to stagehand that the package is getting deleted)
Actual: apparently (?) no signal on uninstall | priority | add install uninstall activate deactivate hooks to pub to repro pub global activate stagehand mkdir empty cd empty stagehand webapp say y to analytics pub global deactivate stagehand cat stagehand expected stagehand file is deleted or a notice is sent to stagehand that the package is getting deleted actual apparently no signal on uninstall | 1 |
655,903 | 21,714,000,710 | IssuesEvent | 2022-05-10 16:05:30 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | LPCXpresso55S69 incorrect device name for JLink runner | bug priority: low area: Toolchains platform: NXP has-pr | **Describe the bug**
Device name passed to JLink runner is incorrect for the LPC55S69 device. This causes issues flashing the device when flash size is large.
**To Reproduce**
Build an application where flash size exceeds 300KB. Then use west flash command with JLink runner. JLink programming will fail.
**Expected behavior**
JLink should be able to program the flash up to maximum flash size.
**Impact**
Minimal
**Logs and console output**
west flash
-- west flash: rebuilding
ninja: no work to do.
-- west flash: using runner jlink
-- runners.jlink: JLink version: 7.52d
-- runners.jlink: Flashing file: zephyr/zephyr.bin
FATAL ERROR: command exited with status 1: /opt/SEGGER/JLink_V752d/JLinkExe -nogui 1 -if swd -speed auto -device LPC55S69_core0 -CommanderScript /tmp/tmpj2ul5gx7jlink/runner.jlink -nogui 1
| 1.0 | LPCXpresso55S69 incorrect device name for JLink runner - **Describe the bug**
Device name passed to JLink runner is incorrect for the LPC55S69 device. This causes issues flashing the device when flash size is large.
**To Reproduce**
Build an application where flash size exceeds 300KB. Then use west flash command with JLink runner. JLink programming will fail.
**Expected behavior**
JLink should be able to program the flash up to maximum flash size.
**Impact**
Minimal
**Logs and console output**
west flash
-- west flash: rebuilding
ninja: no work to do.
-- west flash: using runner jlink
-- runners.jlink: JLink version: 7.52d
-- runners.jlink: Flashing file: zephyr/zephyr.bin
FATAL ERROR: command exited with status 1: /opt/SEGGER/JLink_V752d/JLinkExe -nogui 1 -if swd -speed auto -device LPC55S69_core0 -CommanderScript /tmp/tmpj2ul5gx7jlink/runner.jlink -nogui 1
| priority | incorrect device name for jlink runner describe the bug device name passed to jlink runner is incorrect for the device this causes issues flashing the device when flash size is large to reproduce build an application where flash size exceeds then use west flash command with jlink runner jlink programming will fail expected behavior jlink should be able to program the flash up to maximum flash size impact minimal logs and console output west flash west flash rebuilding ninja no work to do west flash using runner jlink runners jlink jlink version runners jlink flashing file zephyr zephyr bin fatal error command exited with status opt segger jlink jlinkexe nogui if swd speed auto device commanderscript tmp runner jlink nogui | 1 |
556,700 | 16,488,909,600 | IssuesEvent | 2021-05-24 22:52:23 | kubeapps/kubeapps | https://api.github.com/repos/kubeapps/kubeapps | opened | pinniped-proxy should cache credentials for lifetime of credential request | component/pinniped-proxy kind/feature priority/low size/M | ### Description:
When initially creating the pinniped-proxy service, we had planned to cache the response of the token credential request and re-use it for subsequent requests.
As it is, it worked OK without this initially, but it is still overkill to be doing so for every request, and may be too slow in the future when pinniped tries to use the [certificate signing requests API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). | 1.0 | pinniped-proxy should cache credentials for lifetime of credential request - ### Description:
When initially creating the pinniped-proxy service, we had planned to cache the response of the token credential request and re-use it for subsequent requests.
As it is, it worked OK without this initially, but it is still overkill to be doing so for every request, and may be too slow in the future when pinniped tries to use the [certificate signing requests API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). | priority | pinniped proxy should cache credentials for lifetime of credential request description when initially creating the pinniped proxy service we had planned to cache the response of the token credential request and re use it for subsequent requests as it is it worked ok without this initially but it is still overkill to be doing so for every request and may be too slow in the future when pinniped tries to use the | 1 |
700,187 | 24,050,076,375 | IssuesEvent | 2022-09-16 12:00:29 | CS3219-AY2223S1/cs3219-project-ay2223s1-g42 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g42 | closed | feat: add swagger docs to backend API | backend priority:low | Add swagger documentation to the backend api for easy API documentation, as well as enforce proper http exceptions for every endpoint | 1.0 | feat: add swagger docs to backend API - Add swagger documentation to the backend api for easy API documentation, as well as enforce proper http exceptions for every endpoint | priority | feat add swagger docs to backend api add swagger documentation to the backend api for easy api documentation as well as enforce proper http exceptions for every endpoint | 1 |
119,750 | 4,775,382,928 | IssuesEvent | 2016-10-27 10:11:14 | rndsolutions/hawkcd | https://api.github.com/repos/rndsolutions/hawkcd | opened | Job's name overlaps with icon | bug low priority ui | If a Job's name is too long it overlaps with the burger icon on its right, on the Run Management screen.

| 1.0 | Job's name overlaps with icon - If a Job's name is too long it overlaps with the burger icon on its right, on the Run Management screen.

| priority | job s name overlaps with icon if a job s name is too long it overlaps with the burger icon on its right on the run management screen | 1 |
359,479 | 10,676,881,757 | IssuesEvent | 2019-10-21 14:32:04 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | hci_usb: NRF52840 connecting addtional peripheral fails | Waiting for response area: Bluetooth bug platform: nRF priority: low | **Describe the bug**
When trying to connect an addtional peripheral, while another is already connected and exchanging data, one the following may happen (for me roughtly 75%).
1. Most of the time the new connection fails to be established in addition to failing to read on the first connected peripheral.
2. The first connected peripheral fails to read characteristic values. The second connection attempt succeeds but still fails to read data (service discovery is never finished).
This error does **not** occur, if both peripherals are connected **before** exchanging data. After both connections are established, data transfer works as expected.
I tested the same peripherals with a BCM dongle, where this problem does not occur.
**To Reproduce**
Steps to reproduce the behavior on the nRF52840_pca10056 (all 3 boards):
1. usb_hci example on one board with CONFIG_BT_MAX_CONN increased to 10
2. use the bluetooth peripheral example on two other boards
3. Connect one peripheral and read characterisic values continuously
4. Connect the other peripheral and read characteristics values.
**Expected behavior**
(An) additional peripheral(s) can be connected while another exchanges data. The data exchange may still be delayed while the connection procedure is performed.
**Impact**
This is a showstopper as addtional peripherals must be connected, while other are already exchanging data.
**btmon output**
Log of the failure where the first connected peripheral fails to read data after the second is connected:
[btmon-log-Frist_peripheral_fails_to_read_pca10056.log](https://github.com/zephyrproject-rtos/zephyr/files/3530302/btmon-log-Frist_peripheral_fails_to_read_pca10056.log)
Log of the succes of the same steps with a BCM BT USB dongle
[btmon-log_bcm_dongle_two_peripheral_success.log](https://github.com/zephyrproject-rtos/zephyr/files/3530303/btmon-log_bcm_dongle_two_peripheral_success.log)
Log of the success of the same steps with pca10056 and BT_DEBUG enabled (see Additional context)
[btmon-log_dual_peripheral_success_pca10056_BT_DEBUG.log](https://github.com/zephyrproject-rtos/zephyr/files/3530399/btmon-log_dual_peripheral_success_pca10056_BT_DEBUG.log)
**Environment:**
- OS: Linux with Bluetooth subsystem version 2.22 and bluez5.50
- Toolchain (macos/gcc-arm-none-eabi 7.3.1)
- zephyr v1.14.0
**Additional context**
1. hci_usb was also tested with zephyr version v1.14.1-rc2, with the same errors
2. hci_usb was also tested with zephyr version v1.14.1-rc2, with BT_DEBUG, BT_DEBUG_HCI, BT_DEBUG_ATT to get some logging. **The error was not reproducable any more**, therefore no such logs attached here. | 1.0 | hci_usb: NRF52840 connecting addtional peripheral fails - **Describe the bug**
When trying to connect an addtional peripheral, while another is already connected and exchanging data, one the following may happen (for me roughtly 75%).
1. Most of the time the new connection fails to be established in addition to failing to read on the first connected peripheral.
2. The first connected peripheral fails to read characteristic values. The second connection attempt succeeds but still fails to read data (service discovery is never finished).
This error does **not** occur, if both peripherals are connected **before** exchanging data. After both connections are established, data transfer works as expected.
I tested the same peripherals with a BCM dongle, where this problem does not occur.
**To Reproduce**
Steps to reproduce the behavior on the nRF52840_pca10056 (all 3 boards):
1. usb_hci example on one board with CONFIG_BT_MAX_CONN increased to 10
2. use the bluetooth peripheral example on two other boards
3. Connect one peripheral and read characterisic values continuously
4. Connect the other peripheral and read characteristics values.
**Expected behavior**
(An) additional peripheral(s) can be connected while another exchanges data. The data exchange may still be delayed while the connection procedure is performed.
**Impact**
This is a showstopper as addtional peripherals must be connected, while other are already exchanging data.
**btmon output**
Log of the failure where the first connected peripheral fails to read data after the second is connected:
[btmon-log-Frist_peripheral_fails_to_read_pca10056.log](https://github.com/zephyrproject-rtos/zephyr/files/3530302/btmon-log-Frist_peripheral_fails_to_read_pca10056.log)
Log of the succes of the same steps with a BCM BT USB dongle
[btmon-log_bcm_dongle_two_peripheral_success.log](https://github.com/zephyrproject-rtos/zephyr/files/3530303/btmon-log_bcm_dongle_two_peripheral_success.log)
Log of the success of the same steps with pca10056 and BT_DEBUG enabled (see Additional context)
[btmon-log_dual_peripheral_success_pca10056_BT_DEBUG.log](https://github.com/zephyrproject-rtos/zephyr/files/3530399/btmon-log_dual_peripheral_success_pca10056_BT_DEBUG.log)
**Environment:**
- OS: Linux with Bluetooth subsystem version 2.22 and bluez5.50
- Toolchain (macos/gcc-arm-none-eabi 7.3.1)
- zephyr v1.14.0
**Additional context**
1. hci_usb was also tested with zephyr version v1.14.1-rc2, with the same errors
2. hci_usb was also tested with zephyr version v1.14.1-rc2, with BT_DEBUG, BT_DEBUG_HCI, BT_DEBUG_ATT to get some logging. **The error was not reproducable any more**, therefore no such logs attached here. | priority | hci usb connecting addtional peripheral fails describe the bug when trying to connect an addtional peripheral while another is already connected and exchanging data one the following may happen for me roughtly most of the time the new connection fails to be established in addition to failing to read on the first connected peripheral the first connected peripheral fails to read characteristic values the second connection attempt succeeds but still fails to read data service discovery is never finished this error does not occur if both peripherals are connected before exchanging data after both connections are established data transfer works as expected i tested the same peripherals with a bcm dongle where this problem does not occur to reproduce steps to reproduce the behavior on the all boards usb hci example on one board with config bt max conn increased to use the bluetooth peripheral example on two other boards connect one peripheral and read characterisic values continuously connect the other peripheral and read characteristics values expected behavior an additional peripheral s can be connected while another exchanges data the data exchange may still be delayed while the connection procedure is performed impact this is a showstopper as addtional peripherals must be connected while other are already exchanging data btmon output log of the failure where the first connected peripheral fails to read data after the second is connected log of the succes of the same steps with a bcm bt usb dongle log of the success of the same steps with and bt debug enabled see additional context environment os linux with bluetooth subsystem version and toolchain macos gcc arm none eabi zephyr additional context hci usb was also tested with zephyr version with the same errors hci usb was also tested with zephyr version with bt debug bt debug hci bt debug att to get some logging the error was not reproducable any more therefore no such logs attached here | 1 |
606,425 | 18,762,207,821 | IssuesEvent | 2021-11-05 17:51:48 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | opened | Add support for Firecracker snapshots | Type: Backlog Status: Accepted Priority: Low Category: Firecracker | **Description**
From version 0.23.0, Firecracker added a snapshot feature for microVMs. It would be nice to integrate this with the existing snapshot management workflow.
**Use case**
Support snapshots for microVMs.
**Interface Changes**
The existing interfaces should be used.
**Additional Context**
[Firecracker snapshots documentation](https://github.com/firecracker-microvm/firecracker/blob/main/docs/snapshotting/snapshot-support.md#about-microvm-snapshotting)
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| 1.0 | Add support for Firecracker snapshots - **Description**
From version 0.23.0, Firecracker added a snapshot feature for microVMs. It would be nice to integrate this with the existing snapshot management workflow.
**Use case**
Support snapshots for microVMs.
**Interface Changes**
The existing interfaces should be used.
**Additional Context**
[Firecracker snapshots documentation](https://github.com/firecracker-microvm/firecracker/blob/main/docs/snapshotting/snapshot-support.md#about-microvm-snapshotting)
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| priority | add support for firecracker snapshots description from version firecracker added snapshots feature for microvms it would be nice to integrate this with the existing snapshot management workflow use case support snapshots for microvms interface changes the existing interfaces should be used additional context progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches | 1 |
279,721 | 8,672,189,709 | IssuesEvent | 2018-11-29 21:23:00 | bounswe/bounswe2018group5 | https://api.github.com/repos/bounswe/bounswe2018group5 | closed | Add password validation to register screen | Effort: Low Platform: Android Priority: Medium Status: Available Type: Enhancement | **DoD**
* Passwords are validated before making a request to the server, according to the rules set in the requirements
* User is warned on invalid passwords | 1.0 | Add password validation to register screen - **DoD**
* Passwords are validated before making a request to the server, according to the rules set in the requirements
* User is warned on invalid passwords | priority | add password validation to register screen dod passwords are validated before making a request to server according to rules set in requirements user is warned on invalid passwords | 1 |
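The two DoD items above describe a standard client-side check: validate locally, warn on failure, and only then call the server. A minimal Python sketch of that pattern (the concrete password rules below are assumptions for illustration, since the requirements document referenced by the issue is not included):

```python
import re

def validate_password(password):
    """Return a list of violated rules; an empty list means the password is valid.

    The specific rules here are placeholders -- the real ones live in the
    project's requirements document, which is not part of this issue.
    """
    errors = []
    pw = password or ""
    if len(pw) < 8:
        errors.append("must be at least 8 characters long")
    if not re.search(r"[A-Za-z]", pw):
        errors.append("must contain at least one letter")
    if not re.search(r"\d", pw):
        errors.append("must contain at least one digit")
    return errors

def register(username, password, send_request):
    """Validate locally first; only hit the server when validation passes."""
    errors = validate_password(password)
    if errors:
        return ("warn_user", errors)  # user is warned, no network call is made
    return ("request_sent", send_request(username, password))
```

On Android this logic would sit in the register screen's view layer (in Kotlin or Java), but the shape is the same: validate before the request, surface the specific violations to the user.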
432,750 | 12,497,753,009 | IssuesEvent | 2020-06-01 17:02:48 | mapbox/mapbox-navigation-android | https://api.github.com/repos/mapbox/mapbox-navigation-android | closed | IllegalArgumentException when simulating a route | bug low priority needs investigation | Hi Mapbox team, I'm having a strange bug using the `NavigationView`. I generate a route, pass it to the `NavigationViewOptions.Builder`, and display it on the `NavigationView`. Everything works fine until I set `shouldSimulateRoute` to true on the builder; it then crashes every time with the following error:
```
E/AndroidRuntime: FATAL EXCEPTION: main
Process: pro.mobile4.vision, PID: 20109
java.lang.IllegalArgumentException: Non-null and non-empty location list required.
at com.mapbox.services.android.navigation.v5.location.replay.ReplayLocationDispatcher.checkValidInput(ReplayLocationDispatcher.java:79)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayLocationDispatcher.<init>(ReplayLocationDispatcher.java:22)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.obtainDispatcher(ReplayRouteLocationEngine.java:164)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.start(ReplayRouteLocationEngine.java:154)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.beginReplayWith(ReplayRouteLocationEngine.java:203)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.requestLocationUpdates(ReplayRouteLocationEngine.java:113)
at com.mapbox.services.android.navigation.v5.navigation.LocationUpdater.requestInitialLocationUpdates(LocationUpdater.java:58)
at com.mapbox.services.android.navigation.v5.navigation.LocationUpdater.<init>(LocationUpdater.java:31)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.initializeLocationUpdater(NavigationService.java:124)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.initialize(NavigationService.java:95)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.startNavigation(NavigationService.java:66)
at com.mapbox.services.android.navigation.v5.navigation.MapboxNavigation.onServiceConnected(MapboxNavigation.java:810)
at android.app.LoadedApk$ServiceDispatcher.doConnected(LoadedApk.java:1848)
at android.app.LoadedApk$ServiceDispatcher$RunConnection.run(LoadedApk.java:1880)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7073)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:964)
```
My code to start the navigation is as follows:
```
private fun startNavigation() {
val options = NavigationViewOptions.builder()
.directionsRoute(directionsRoute)
.navigationListener(this@NavigationFragment)
.progressChangeListener(this@NavigationFragment)
.shouldSimulateRoute(true)
.build()
navigationView.startNavigation(options)
navigationView.findViewById<View>(R.id.feedbackFab).visibility = View.GONE
navigationView.findViewById<View>(R.id.alertView).visibility = View.GONE
}
```
When I debugged a little, I found that the ReplayRoute gets an empty array of points even though the route has all the necessary points.
I tried passing the route to the `ReplayRouteLocationEngine` directly and then passing the engine to the builder, but I still get the same issue.
**Android API:** 28
**Mapbox Navigation SDK version:** 0.41.0
| 1.0 | IllegalArgumentException when simulating a route - Hi Mapbox team, I'm having a strange bug using the `NavigationView`. I generate a route, pass it to the `NavigationViewOptions.Builder`, and display it on the `NavigationView`. Everything works fine until I set `shouldSimulateRoute` to true on the builder; it then crashes every time with the following error:
```
E/AndroidRuntime: FATAL EXCEPTION: main
Process: pro.mobile4.vision, PID: 20109
java.lang.IllegalArgumentException: Non-null and non-empty location list required.
at com.mapbox.services.android.navigation.v5.location.replay.ReplayLocationDispatcher.checkValidInput(ReplayLocationDispatcher.java:79)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayLocationDispatcher.<init>(ReplayLocationDispatcher.java:22)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.obtainDispatcher(ReplayRouteLocationEngine.java:164)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.start(ReplayRouteLocationEngine.java:154)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.beginReplayWith(ReplayRouteLocationEngine.java:203)
at com.mapbox.services.android.navigation.v5.location.replay.ReplayRouteLocationEngine.requestLocationUpdates(ReplayRouteLocationEngine.java:113)
at com.mapbox.services.android.navigation.v5.navigation.LocationUpdater.requestInitialLocationUpdates(LocationUpdater.java:58)
at com.mapbox.services.android.navigation.v5.navigation.LocationUpdater.<init>(LocationUpdater.java:31)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.initializeLocationUpdater(NavigationService.java:124)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.initialize(NavigationService.java:95)
at com.mapbox.services.android.navigation.v5.navigation.NavigationService.startNavigation(NavigationService.java:66)
at com.mapbox.services.android.navigation.v5.navigation.MapboxNavigation.onServiceConnected(MapboxNavigation.java:810)
at android.app.LoadedApk$ServiceDispatcher.doConnected(LoadedApk.java:1848)
at android.app.LoadedApk$ServiceDispatcher$RunConnection.run(LoadedApk.java:1880)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7073)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:964)
```
My code to start the navigation is as follows:
```
private fun startNavigation() {
val options = NavigationViewOptions.builder()
.directionsRoute(directionsRoute)
.navigationListener(this@NavigationFragment)
.progressChangeListener(this@NavigationFragment)
.shouldSimulateRoute(true)
.build()
navigationView.startNavigation(options)
navigationView.findViewById<View>(R.id.feedbackFab).visibility = View.GONE
navigationView.findViewById<View>(R.id.alertView).visibility = View.GONE
}
```
When I debugged a little, I found that the ReplayRoute gets an empty array of points even though the route has all the necessary points.
I tried passing the route to the `ReplayRouteLocationEngine` directly and then passing the engine to the builder, but I still get the same issue.
**Android API:** 28
**Mapbox Navigation SDK version:** 0.41.0
| priority | illegalargumentexception when simulating a route hi mapbox team i m having a strange bug using the navigationview i generate a route then pass the route to the navigationviewoptions builder and displays it on the navigationview everything works fine until i set the shouldsimulateroute to true for the builder it crashes every time with the following error e androidruntime fatal exception main process pro vision pid java lang illegalargumentexception non null and non empty location list required at com mapbox services android navigation location replay replaylocationdispatcher checkvalidinput replaylocationdispatcher java at com mapbox services android navigation location replay replaylocationdispatcher replaylocationdispatcher java at com mapbox services android navigation location replay replayroutelocationengine obtaindispatcher replayroutelocationengine java at com mapbox services android navigation location replay replayroutelocationengine start replayroutelocationengine java at com mapbox services android navigation location replay replayroutelocationengine beginreplaywith replayroutelocationengine java at com mapbox services android navigation location replay replayroutelocationengine requestlocationupdates replayroutelocationengine java at com mapbox services android navigation navigation locationupdater requestinitiallocationupdates locationupdater java at com mapbox services android navigation navigation locationupdater locationupdater java at com mapbox services android navigation navigation navigationservice initializelocationupdater navigationservice java at com mapbox services android navigation navigation navigationservice initialize navigationservice java at com mapbox services android navigation navigation navigationservice startnavigation navigationservice java at com mapbox services android navigation navigation mapboxnavigation onserviceconnected mapboxnavigation java at android app loadedapk servicedispatcher doconnected loadedapk 
java at android app loadedapk servicedispatcher runconnection run loadedapk java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java my code to start the navigation is as follow private fun startnavigation val options navigationviewoptions builder directionsroute directionsroute navigationlistener this navigationfragment progresschangelistener this navigationfragment shouldsimulateroute true build navigationview startnavigation options navigationview findviewbyid r id feedbackfab visibility view gone navigationview findviewbyid r id alertview visibility view gone when i debug a little bit i found that the replayroute gets an empty array of points even though the navigation has all the necessary point i tried passing the route to the replayroutelocationengine and then pass the engine to the builder but still getting the same issue android api mapbox navigation sdk version | 1 |
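The top frame of the trace in the record above is a plain precondition check: `ReplayLocationDispatcher` refuses a null or empty location list. A toy Python model of that guard, plus a caller-side check that names the real problem (the class and function names here are illustrative stand-ins, not the actual Mapbox Java API):

```python
class ReplayDispatcher:
    """Toy model of the constructor guard that throws in the trace above."""

    def __init__(self, locations):
        # Mirrors checkValidInput(): "Non-null and non-empty location list required."
        if not locations:
            raise ValueError("Non-null and non-empty location list required.")
        self.locations = list(locations)

def start_replay(route_points):
    """Guard at the caller, so the failure points at the real problem.

    The report says the replay engine receives an empty point array even
    though the route itself has coordinates, so failing here with a
    route-specific message is more useful than the generic dispatcher error.
    """
    if not route_points:
        raise ValueError("route produced no replay points; check route conversion")
    return ReplayDispatcher(route_points)
```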
314,521 | 9,598,245,693 | IssuesEvent | 2019-05-10 00:35:54 | Da-Technomancer/Essentials | https://api.github.com/repos/Da-Technomancer/Essentials | closed | 'Potentially Dangerous alternative prefix' | bug low priority | As of essentials-1.12.2-1.2.0, forge is reporting warnings in the console:
```
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_brazier`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_slottedchest`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_sortinghopper`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_itemchuteport`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `crossroads` for name `port_extender`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
```
Some of those seem to overlap with your crossroads mod, so you probably need to fix both in one shot :). | 1.0 | 'Potentially Dangerous alternative prefix' - As of essentials-1.12.2-1.2.0, forge is reporting warnings in the console:
```
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_brazier`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_slottedchest`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_sortinghopper`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `minecraft` for name `crossroads_itemchuteport`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
[00:34:24] [main/WARN] [FML]: Potentially Dangerous alternative prefix `crossroads` for name `port_extender`, expected `essentials`. This could be a intended override, but in most cases indicates a broken mod.
```
Some of those seem to overlap with your crossroads mod, so you probably need to fix both in one shot :). | priority | potentially dangerous alternative prefix as of essentials forge is reporting warnings in the console potentially dangerous alternative prefix minecraft for name crossroads brazier expected essentials this could be a intended override but in most cases indicates a broken mod potentially dangerous alternative prefix minecraft for name crossroads slottedchest expected essentials this could be a intended override but in most cases indicates a broken mod potentially dangerous alternative prefix minecraft for name crossroads sortinghopper expected essentials this could be a intended override but in most cases indicates a broken mod potentially dangerous alternative prefix minecraft for name crossroads itemchuteport expected essentials this could be a intended override but in most cases indicates a broken mod potentially dangerous alternative prefix crossroads for name port extender expected essentials this could be a intended override but in most cases indicates a broken mod some of those seem to overlap with your crossroads mod so you probably need to fix both in one shot | 1 |
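For background on the warning in the record above: a registry name has the form `namespace:path`, a bare path defaults to the `minecraft` namespace, and Forge warns when the namespace differs from the mod currently registering. A rough Python model of that check (illustrative only, not Forge's actual code):

```python
def check_registry_name(name, active_modid):
    """Return the Forge-style warning for a mismatched namespace, else None."""
    namespace, _, path = name.rpartition(":")
    if not namespace:
        namespace = "minecraft"  # bare names default to the vanilla namespace
    if namespace != active_modid:
        return ("Potentially Dangerous alternative prefix `%s` for name `%s`, "
                "expected `%s`." % (namespace, path, active_modid))
    return None
```

The usual remedy is to register each entry under the mod's own namespace (e.g. `essentials:crossroads_brazier`) instead of leaving the name unqualified or under another mod's id.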
567,609 | 16,887,735,519 | IssuesEvent | 2021-06-23 04:14:12 | revarbat/BOSL2 | https://api.github.com/repos/revarbat/BOSL2 | closed | prismoid(), cube(), and similar should reject negative chamfer/rounding | Enhancement Low Priority | **Describe the bug**
Setting a negative chamfer on `prismoid()` has no effect.
**Code To Reproduce Bug**
```
cmf=-1;
xdistribute(20) {
cyl(r=5, h=10, chamfer=cmf);
prismoid(size1=[2,5], size2=[10,5], h=20, chamfer=cmf);
}
```
**Expected behavior**
Negative chamfers on both the cylinder and the prismoid.
**Screenshots**
* [`cmf=1`](https://user-images.githubusercontent.com/1885701/119321468-87a3a400-bc31-11eb-8f8a-66ccd10a6cec.png)
* [`cmf=-1`](https://user-images.githubusercontent.com/1885701/119321520-9722ed00-bc31-11eb-93ae-c33d7a8ca6b5.png)
**Additional context**
- BOSL2 Version: 2.0.638
- OpenSCAD Version: 2021.01
I mentioned this issue in #536 but I figured I should just file a separate issue. | 1.0 | prismoid(), cube(), and similar should reject negative chamfer/rounding - **Describe the bug**
Setting a negative chamfer on `prismoid()` has no effect.
**Code To Reproduce Bug**
```
cmf=-1;
xdistribute(20) {
cyl(r=5, h=10, chamfer=cmf);
prismoid(size1=[2,5], size2=[10,5], h=20, chamfer=cmf);
}
```
**Expected behavior**
Negative chamfers on both the cylinder and the prismoid.
**Screenshots**
* [`cmf=1`](https://user-images.githubusercontent.com/1885701/119321468-87a3a400-bc31-11eb-8f8a-66ccd10a6cec.png)
* [`cmf=-1`](https://user-images.githubusercontent.com/1885701/119321520-9722ed00-bc31-11eb-93ae-c33d7a8ca6b5.png)
**Additional context**
- BOSL2 Version: 2.0.638
- OpenSCAD Version: 2021.01
I mentioned this issue in #536 but I figured I should just file a separate issue. | priority | prismoid cube and similar should reject negative chamfer rounding describe the bug setting a negative chamfer on prismoid has no effect code to reproduce bug cmf xdistribute cyl r h chamfer cmf prismoid h chamfer cmf expected behavior negatives chamfers on both the cylinder and the prismoid screenshots additional context version openscad version i mentioned this issue in but i figured i should just file a separate issue | 1 |
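Per the issue title, the resolution chosen was to reject a negative chamfer loudly rather than silently ignore it. In BOSL2 itself that would use OpenSCAD-style assertion checks; the sketch below shows the same validation idea in Python, with a purely illustrative stand-in for `prismoid()`:

```python
def prismoid(size1, size2, h, chamfer=0.0):
    """Reject a negative chamfer instead of silently ignoring it."""
    if chamfer < 0:
        raise ValueError("chamfer must be >= 0, got %r" % (chamfer,))
    return {"size1": size1, "size2": size2, "h": h, "chamfer": chamfer}
```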
323,308 | 9,853,055,952 | IssuesEvent | 2019-06-19 14:04:36 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Index internal API is exposed in public IndexAwarePredicate | Module: IMap Priority: Low Public API/SPI breaking change Source: Internal Team: Core Type: Enhancement | `IndexAwarePredicate` is considered to be part of the public API, but its methods accept `QueryContext`, which is internal to the implementation; moreover, a `QueryContext` instance allows obtaining an instance of `Index`, which is internal too and exposes dangerous methods like `destroy`.
Private API parts of the mentioned classes/interfaces must be factored out from the public API. New purely public classes/interfaces must be placed under the public packages. <s>The existing semipublic classes/interfaces contained in the internal packages must be marked deprecated. Special care must be taken to preserve the binary compatibility.</s> Backward compatibility is not required, since we are breaking things in 4.0. | 1.0 | Index internal API is exposed in public IndexAwarePredicate - `IndexAwarePredicate` is considered to be part of the public API, but its methods accept `QueryContext`, which is internal to the implementation; moreover, a `QueryContext` instance allows obtaining an instance of `Index`, which is internal too and exposes dangerous methods like `destroy`.
Private API parts of the mentioned classes/interfaces must be factored out from the public API. New purely public classes/interfaces must be placed under the public packages. <s>The existing semipublic classes/interfaces contained in the internal packages must be marked deprecated. Special care must be taken to preserve the binary compatibility.</s> Backward compatibility is not required, since we are breaking things in 4.0. | priority | index internal api is exposed in public indexawarepredicate indexawarepredicate is considered to be a part of the public api but its methods accept querycontext which is internal to the implementation moreover a querycontext instance allows to obtain an instance of index which is internal too and exposes dangerous methods like destroy private api parts of the mentioned classes interfaces must be factored out from the public api new purely public classes interfaces must be placed under the public packages the existing semipublic classes interfaces contained in the internal packages must be marked deprecated special care must be taken to preserve the binary compatibility backward compatibility is not required since we are breaking things in | 1 |
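The split proposed in the record above -- a public facade that hides the internal machinery -- can be sketched generically. These are Python stand-ins for illustration, not Hazelcast's actual types:

```python
class InternalIndex:
    """Internal type: exposes dangerous lifecycle methods like destroy()."""

    def __init__(self, attribute):
        self.attribute = attribute
        self.destroyed = False
        self._records = {}  # value -> set of entry keys

    def put(self, key, value):
        self._records.setdefault(value, set()).add(key)

    def get_records(self, value):
        return self._records.get(value, set())

    def destroy(self):  # must never be reachable from user predicates
        self.destroyed = True

class QueryContext:
    """Public facade: lets a predicate query an index, nothing more."""

    def __init__(self, indexes):
        self._indexes = indexes  # name -> InternalIndex, kept private

    def matching_keys(self, attribute, value):
        index = self._indexes.get(attribute)
        return None if index is None else index.get_records(value)
```

A user-supplied predicate then works only against `QueryContext`, so nothing in user code can reach `destroy()`.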
34,780 | 2,787,713,321 | IssuesEvent | 2015-05-08 08:26:57 | barbacenasmc/skosay | https://api.github.com/repos/barbacenasmc/skosay | closed | Sign Out link is partially blocked in the Menu | Low Priority Low Severity | Description:
The Sign Out link is partially hidden by the header (not completely shown) in the hamburger menu. See image # 2 below. This bug was originally sent by Jason via text message. See attached screenshots.
Steps to Replicate:
1. Go to app.skosay.com
2. Sign-in
3. Click the hamburger menu.
Actual Result: The "Sign Out" link is not completely shown.
Expected Result: It should completely show "Sign Out" in the hamburger menu.
Other Details: The bug was encountered using Samsung S3.
Attachments:


| 1.0 | Sign Out link is partially blocked in the Menu - Description:
The Sign Out link is partially hidden by the header (not completely shown) in the hamburger menu. See image # 2 below. This bug was originally sent by Jason via text message. See attached screenshots.
Steps to Replicate:
1. Go to app.skosay.com
2. Sign-in
3. Click the hamburger menu.
Actual Result: The "Sign Out" link is not completely shown.
Expected Result: It should completely show "Sign Out" in the hamburger menu.
Other Details: The bug was encountered using Samsung S3.
Attachments:


| priority | sign out link is partially blocked in the menu description the sign out link is partially hidden by the header not completely shown in the hamburger menu see image below this bug was originally sent by jason via text message see attached screenshots steps to replicate go to app skosay com sign in click the hamburger menu actual result the sign out link is not completely shown expected result it should completely show sign out in the hamburger menu other details the bug was encountered using samsung attachments | 1 |
502,171 | 14,541,685,775 | IssuesEvent | 2020-12-15 14:51:27 | staxrip/staxrip | https://api.github.com/repos/staxrip/staxrip | closed | Use system time for calculations when writing log file (DST issue) | feature request priority low | **Describe the bug**
Although time changes don't happen often, StaxRip doesn't appear to account for system time changes when calculating job durations for its logs. This is a very, very, very low-priority request to base those duration calculations on the actual system time elapsed. This may be a tool issue.
**Expected behaviour**
When clocks go backwards or forwards, I expect applications to take that into account when outputting data.
**How to reproduce the issue**
Adjust your system time in the middle of an encode; alternatively, if you were encoding at 1am in the UK on 25 October 2020 (when the clocks went back an hour), you will get a log similar to the one posted below.
**Provide information**
I have removed non-important information. Full log attached. **Log file was created at 01.25:03** on 25/10/2020
```
------------------------- System Environment -------------------------
StaxRip : 2.1.3.0
Windows : Windows 10 Pro 1909
Language : English (United Kingdom)
CPU : Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz
GPU : Microsoft Remote Display Adapter, ASPEED Graphics Family(WDDM)
Resolution : 3840 x 2160
DPI : 96
----------------------- Media Info Source File -----------------------
...
------------------------------ Demux MKV ------------------------------
mkvextract 47
...
Start: 01:36:41
End: 01:36:55
Duration: 00:00:13
...
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 0.9.0-DEV-rev0-g81b4481e1-gcc10.0.1 Patman
...
Start: 01:36:55
End: 01:36:56
Duration: 00:00:01
...
---------------------- Indexing using ffmsindex ----------------------
...
Writing index... done.
Start: 01:36:57
End: 01:36:57
Duration: 00:00:00
--------------------------- AviSynth Script ---------------------------
...
---------------------- Media Info Audio Source 1 ----------------------
...
-------------------- Audio Encoding: Command Line --------------------
...
Metadata:
major_brand : M4A
minor_version : 1
compatible_brands: isomM4A mp42
creation_time : 2020-10-25T00:36:55.000000Z
Duration: 00:29:52.09, start: 0.000000, bitrate: 155 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 154 kb/s (default)
Metadata:
creation_time : 2020-10-25T00:36:55.000000Z
handler_name :
Stream mapping:
Stream #0:0 -> #0:0 (aac (native) -> eac3 (native))
...
Metadata:
major_brand : M4A
minor_version : 1
compatible_brands: isomM4A mp42
encoder : Lavf58.43.100
Stream #0:0(und): Audio: eac3, 48000 Hz, stereo, fltp, 224 kb/s (default)
Metadata:
creation_time : 2020-10-25T00:36:55.000000Z
handler_name :
encoder : Lavc58.86.101 eac3
video:0kB audio:49003kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Start: 01:36:58
End: 01:37:04
Duration: 00:00:06
...
--------------------------- Video encoding ---------------------------
x264 M-0.160.3000-33f9e14-gcc10.0.1 Patman
avs2pipemod[info]: writing 53707 frames of 30000/1001 fps, 1280x720,
sar 0:0, YUV-420-planar-8bit progressive video.
y4m [info]: 1280x720p 0:0 @ 30000/1001 fps (cfr)
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x264 [info]: profile High, level 3.1, 4:2:0, 8-bit
avs2pipemod[info]: total elapsed time is 2880.319 sec.
...
encoded 53707 frames, 18.64 fps, 2644.63 kb/s
Start: 01:36:58
End: 01:25:00
Duration: -01:-11:-57
...
------------------------------- Muxing -------------------------------
mkvmerge 47
mkvmerge v47.0.0 ('Black Flag') 64-bit
'I:\...
Start: 01:25:00
End: 01:25:03
Duration: 00:00:02
General
Format : Matroska
Format version : Version 4
File size : 613 MiB
Duration : 29 min 52 s
Overall bit rate : 2 871 kb/s
Encoded date : UTC 2020-10-25 01:25:00
Writing application : mkvmerge v47.0.0 ('Black Flag') 64-bit
Writing library : libebml v1.3.10 + libmatroska v1.5.2
...
---------------------------- Job Complete ----------------------------
Start: 01:36:40
End: 01:25:03
Duration: -01:-11:-37
```
**Notes before posting**
As mentioned above, this is an incredibly low-priority issue. I just thought I'd inform Stax and the team.
_PS: I'm fully aware that transcoding AAC ~150kbps to E-AC-3 224kbps is pointless._
**Additional context**
[2020-10-25 - 01.25.03 - Last Week Tonight with John Oliver - 1x21 - Episode 21_new_staxrip.log](https://github.com/staxrip/staxrip/files/5434062/2020-10-25.-.01.25.03.-.Last.Week.Tonight.with.John.Oliver.-.1x21.-.Episode.21_new_staxrip.log)
| 1.0 | Use system time for calculations when writing log file (DST issue) - **Describe the bug**
Although time changes don't happen often, StaxRip doesn't appear to account for system time changes when calculating job durations for its logs. This is a very, very, very low-priority request to base those duration calculations on the actual system time elapsed. This may be a tool issue.
**Expected behaviour**
When clocks go backwards or forwards, I expect applications to take that into account when outputting data.
**How to reproduce the issue**
Adjust your system time in the middle of an encode; alternatively, if you were encoding at 1am in the UK on 25 October 2020 (when the clocks went back an hour), you will get a log similar to the one posted below.
**Provide information**
I have removed non-important information. Full log attached. **Log file was created at 01.25:03** on 25/10/2020
```
------------------------- System Environment -------------------------
StaxRip : 2.1.3.0
Windows : Windows 10 Pro 1909
Language : English (United Kingdom)
CPU : Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz
GPU : Microsoft Remote Display Adapter, ASPEED Graphics Family(WDDM)
Resolution : 3840 x 2160
DPI : 96
----------------------- Media Info Source File -----------------------
...
------------------------------ Demux MKV ------------------------------
mkvextract 47
...
Start: 01:36:41
End: 01:36:55
Duration: 00:00:13
...
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 0.9.0-DEV-rev0-g81b4481e1-gcc10.0.1 Patman
...
Start: 01:36:55
End: 01:36:56
Duration: 00:00:01
...
---------------------- Indexing using ffmsindex ----------------------
...
Writing index... done.
Start: 01:36:57
End: 01:36:57
Duration: 00:00:00
--------------------------- AviSynth Script ---------------------------
...
---------------------- Media Info Audio Source 1 ----------------------
...
-------------------- Audio Encoding: Command Line --------------------
...
Metadata:
major_brand : M4A
minor_version : 1
compatible_brands: isomM4A mp42
creation_time : 2020-10-25T00:36:55.000000Z
Duration: 00:29:52.09, start: 0.000000, bitrate: 155 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 154 kb/s (default)
Metadata:
creation_time : 2020-10-25T00:36:55.000000Z
handler_name :
Stream mapping:
Stream #0:0 -> #0:0 (aac (native) -> eac3 (native))
...
Metadata:
major_brand : M4A
minor_version : 1
compatible_brands: isomM4A mp42
encoder : Lavf58.43.100
Stream #0:0(und): Audio: eac3, 48000 Hz, stereo, fltp, 224 kb/s (default)
Metadata:
creation_time : 2020-10-25T00:36:55.000000Z
handler_name :
encoder : Lavc58.86.101 eac3
video:0kB audio:49003kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Start: 01:36:58
End: 01:37:04
Duration: 00:00:06
...
--------------------------- Video encoding ---------------------------
x264 M-0.160.3000-33f9e14-gcc10.0.1 Patman
avs2pipemod[info]: writing 53707 frames of 30000/1001 fps, 1280x720,
sar 0:0, YUV-420-planar-8bit progressive video.
y4m [info]: 1280x720p 0:0 @ 30000/1001 fps (cfr)
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x264 [info]: profile High, level 3.1, 4:2:0, 8-bit
avs2pipemod[info]: total elapsed time is 2880.319 sec.
...
encoded 53707 frames, 18.64 fps, 2644.63 kb/s
Start: 01:36:58
End: 01:25:00
Duration: -01:-11:-57
...
------------------------------- Muxing -------------------------------
mkvmerge 47
mkvmerge v47.0.0 ('Black Flag') 64-bit
'I:\...
Start: 01:25:00
End: 01:25:03
Duration: 00:00:02
General
Format : Matroska
Format version : Version 4
File size : 613 MiB
Duration : 29 min 52 s
Overall bit rate : 2 871 kb/s
Encoded date : UTC 2020-10-25 01:25:00
Writing application : mkvmerge v47.0.0 ('Black Flag') 64-bit
Writing library : libebml v1.3.10 + libmatroska v1.5.2
...
---------------------------- Job Complete ----------------------------
Start: 01:36:40
End: 01:25:03
Duration: -01:-11:-37
```
**Notes before posting**
As mentioned above, this is an incredibly low-priority issue. I just thought I'd inform Stax and the team.
_PS: I'm fully aware that transcoding AAC ~150kbps to E-AC-3 224kbps is pointless._
**Additional context**
[2020-10-25 - 01.25.03 - Last Week Tonight with John Oliver - 1x21 - Episode 21_new_staxrip.log](https://github.com/staxrip/staxrip/files/5434062/2020-10-25.-.01.25.03.-.Last.Week.Tonight.with.John.Oliver.-.1x21.-.Episode.21_new_staxrip.log)
| priority | use system time for calculations when writing log file dst issue describe the bug although time changes don t happen often staxrip doesn t appear to check for system time when calculating job time for logs this is a very very very low priority request to check system time and calculate time taken based on that this may be a tool issue expected behaviour when clocks go backwards for forwards i expect applications to consider that when outputting data how to reproduce the issue adjust your system time in the middle of an encode or if you were encoding at in the uk on the october you will have a similar log to the one posted below provide information i have removed non important information full log attached log file was created at on system environment staxrip windows windows pro language english united kingdom cpu intel r xeon r cpu gpu microsoft remote display adapter aspeed graphics family wddm resolution x dpi media info source file demux mkv mkvextract start end duration mux aac to dev patman start end duration indexing using ffmsindex writing index done start end duration avisynth script media info audio source audio encoding command line metadata major brand minor version compatible brands creation time duration start bitrate kb s stream und audio aac lc hz stereo fltp kb s default metadata creation time handler name stream mapping stream aac native native metadata major brand minor version compatible brands encoder stream und audio hz stereo fltp kb s default metadata creation time handler name encoder video audio subtitle other streams global headers muxing overhead start end duration video encoding m patman writing frames of fps sar yuv planar progressive video fps cfr using cpu capabilities avx profile high level bit total elapsed time is sec encoded frames fps kb s start end duration muxing mkvmerge mkvmerge black flag bit i start end duration general format matroska format version version file size mib duration min s overall bit rate kb s 
encoded date utc writing application mkvmerge black flag bit writing library libebml libmatroska job complete start end duration notes before posting as mentioned above this is an incredibly low priority issue i just though i d inform stax and the team ps i m fully aware than transcoding aac to e ac is pointless additional context | 1 |
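The negative durations in the log above (e.g. `Duration: -01:-11:-57`) come from subtracting wall-clock timestamps across the clock change. A minimal sketch of the requested fix, in Python rather than StaxRip's actual code: keep wall-clock times for the Start/End log lines, but compute durations from a monotonic clock, which is unaffected by DST shifts or manual clock changes.

```python
import time
from datetime import datetime

def run_step(step):
    """Run one job step. Wall-clock time is kept for the Start/End log
    lines, but the duration is computed from the monotonic clock, so a
    DST shift or manual clock change cannot make it negative."""
    start_wall = datetime.now()          # display only
    start_mono = time.monotonic()        # arithmetic only
    step()
    elapsed = time.monotonic() - start_mono   # always >= 0
    print(f"Start: {start_wall:%H:%M:%S}")
    print(f"End: {datetime.now():%H:%M:%S}")
    print(f"Duration: {time.strftime('%H:%M:%S', time.gmtime(elapsed))}")
    return elapsed
```

On Windows the same idea is available via .NET's `Stopwatch` (backed by `QueryPerformanceCounter`), which an application could use instead of subtracting `DateTime.Now` values.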
397,240 | 11,725,598,958 | IssuesEvent | 2020-03-10 13:14:40 | eJourn-al/eJournal | https://api.github.com/repos/eJourn-al/eJournal | closed | Show version number | Priority: High Status: In Progress Type: Enhancement Workload: Low | **Describe the solution you'd like**
Be able to see the version currently deployed as a user of the web app.
**User interface**
Can be added in the footer.
| 1.0 | Show version number - **Describe the solution you'd like**
Be able to see the version currently deployed as a user of the web app.
**User interface**
Can be added in the footer.
| priority | show version number describe the solution you d like be able to see the version currently deployed as a user of the web app user interface can be added in the footer | 1 |
398,092 | 11,737,752,759 | IssuesEvent | 2020-03-11 15:04:53 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Manage Data - Delete Interface Data Redirects to a Broken Page | low-priority bug | After clicking on 'Delete Interface Data', the resulting page is broken


| 1.0 | Manage Data - Delete Interface Data Redirects to a Broken Page - After clicking on 'Delete Interface Data', the resulting page is broken


| priority | manage data delete interface data redirects to a broken page after clicking on delete interface data the resulting page is broken | 1 |
435,651 | 12,537,398,394 | IssuesEvent | 2020-06-05 03:17:41 | rbdannenberg/soundcool | https://api.github.com/repos/rbdannenberg/soundcool | opened | "Select a sound page" style sheet | Priority Low frontend | "Select a sound" page seems not to have a proper style sheet now:
<img width="792" alt="image" src="https://user-images.githubusercontent.com/31784445/83832605-a6591400-a71c-11ea-8be9-0abfd97d11e3.png">
but previously it looked like:
<img width="795" alt="image" src="https://user-images.githubusercontent.com/31784445/83833013-8118d580-a71d-11ea-8ff5-32749c71127a.png">
I traced back old commits and noticed that bootstrap stylesheets were removed in the commits about dependencies on May 26th, and the page has looked like this since then.
| 1.0 | "Select a sound page" style sheet - "Select a sound" page seems not to have a proper style sheet now:
<img width="792" alt="image" src="https://user-images.githubusercontent.com/31784445/83832605-a6591400-a71c-11ea-8be9-0abfd97d11e3.png">
but previously it looked like:
<img width="795" alt="image" src="https://user-images.githubusercontent.com/31784445/83833013-8118d580-a71d-11ea-8ff5-32749c71127a.png">
I traced back old commits and noticed that bootstrap stylesheets were removed in the commits about dependencies on May 26th, and the page has looked like this since then.
| priority | select a sound page style sheet select a sound page seems to not having a proper style sheet now img width alt image src but previously it looks like img width alt image src i traced back old commits and noticed that bootstrap stylesheets were removed on the commits about dependencies on may and the page looks like this since then | 1 |
245,788 | 7,890,931,157 | IssuesEvent | 2018-06-28 10:20:17 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | opened | Spreadsheet Export to Excel does not work properly when the validation type is 'list' | Bug C: Spreadsheet Kendo2 Priority 5 SEV: Low | ### Bug report
When there is a cell with validation type 'list' and the Spreadsheet is exported to Excel, there is an alert when trying to open the file.
### Reproduction of the problem
1. Open the [Dojo example](https://dojo.telerik.com/@NeliKondova/iLIxoMuW)
2. Click the 'Export' button
3. Open the exported excel file
### Current behavior
Message "We found problem with some content in 'Workbook.xlsx'. Do you want us to try to recover as much as we can?... " appears. Such message does not appear with the other validation types such as 'date'.
### Expected/desired behavior
The behavior should be consistent. There should be no alerts when opening a file exported from Spreadsheet.
Last working version: 2016.3.118
### Environment
* **Kendo UI version:** 2018.2.620
* **Browser:** [all ]
| 1.0 | Spreadsheet Export to Excel does not work properly when the validation type is 'list' - ### Bug report
When there is a cell with validation type 'list' and the Spreadsheet is exported to Excel, there is an alert when trying to open the file.
### Reproduction of the problem
1. Open the [Dojo example](https://dojo.telerik.com/@NeliKondova/iLIxoMuW)
2. Click the 'Export' button
3. Open the exported excel file
### Current behavior
Message "We found problem with some content in 'Workbook.xlsx'. Do you want us to try to recover as much as we can?... " appears. Such message does not appear with the other validation types such as 'date'.
### Expected/desired behavior
The behavior should be consistent. There should be no alerts when opening a file exported from Spreadsheet.
Last working version: 2016.3.118
### Environment
* **Kendo UI version:** 2018.2.620
* **Browser:** [all ]
| priority | spreadsheet export to excel does not work properly when the validation type is list bug report when there is a cell with validation type list and the spreadsheet is exported to excel there is an alert when trying to open the file reproduction of the problem open the click the export button open the exported excel file current behavior message we found problem with some content in workbook xlsx do you want us to try to recover as much as we can appears such message does not appear with the other validation types such as date expected desired behavior the behavior should be consistent there should be no alerts when opening a file exported from spreadsheet last working version environment kendo ui version browser | 1 |
682,193 | 23,336,033,883 | IssuesEvent | 2022-08-09 10:00:55 | TheRetroWeb/TRW-Issues | https://api.github.com/repos/TheRetroWeb/TRW-Issues | closed | Add browsers to the legacy support list | enhancement priority_low front_end | > imported from GDocs
Add browser types and versions to the filter for providing lighter CSS | 1.0 | Add browsers to the legacy support list - > imported from GDocs
Add browser types and versions to the filter for providing lighter CSS | priority | add browsers to the legacy support list imported from gdocs add browser types and versions to the filter for providing lighter css | 1 |
286,343 | 8,786,893,947 | IssuesEvent | 2018-12-20 16:56:22 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | Unknown Map Character Selection | component:core mode:prerenewal mode:renewal priority:low status:confirmed | I noticed that every time I create my 3rd character, log in, and then relog, the map shown in character selection is unknown, but when I delete 1 of those 3 characters the map shows correctly.
I am using the pre-renewal system with a 2015-05-13 ragexe client; my trunk is 2 weeks older.
Any idea why this is happening?
Edit -- I've searched the internet and saw the same problem
`https://rathena.org/board/tracker/issue-8235-character-selection-map-bug/`
and this one
`https://rathena.org/board/topic/97902-login-interface-unknown-area/`
| 1.0 | Unknown Map Character Selection - I noticed that every time I create my 3rd character, log in, and then relog, the map shown in character selection is unknown, but when I delete 1 of those 3 characters the map shows correctly.
I am using the pre-renewal system with a 2015-05-13 ragexe client; my trunk is 2 weeks older.
Any idea why this is happening?
Edit -- I've searched the internet and saw the same problem
`https://rathena.org/board/tracker/issue-8235-character-selection-map-bug/`
and this one
`https://rathena.org/board/topic/97902-login-interface-unknown-area/`
| priority | unknown map character selection i noticed that everytime i create my character and login then relog the map show in character selection is unknown but when i delete of those characters the map shows correct i am using pre renewal system with a ragexe client my trunk is weeks older any idea why this is happening edit i ve search on the internet and saw the same problem and this one | 1 |
762,844 | 26,733,189,980 | IssuesEvent | 2023-01-30 07:12:57 | deutsche-nationalbibliothek/pica-rs | https://api.github.com/repos/deutsche-nationalbibliothek/pica-rs | closed | Support to choose PICA Plain as input and output format | priority:low discussion | I usually store PICA data in PICA Plain for easy inspection and analysis with command line tools and text editors. To make use of `pica-rs`, the records need to be converted between Normalized PICA+ and PICA plain, e.g.:
picadata -t plus records.pp | pica filter -s "002@.0 =~ '^O'" | picadata -f plus -t plain
picadata -t plus records.pp | pica filter -s "002@.0 =~ '^O'" | pica print # equivalent
I would like to use [picadata options -t, --to and -f, --from](https://metacpan.org/dist/PICA-Data/view/script/picadata#OPTIONS) in `pica-rs` as well:
* Default value is `plus` (with alias `norm`) to read and write normalized PICA+ (as implemented)
* Parse PICA Plain when `--from` is `pp` or `plain` or input filename has file extension `.pp` or `.plain`
* Write PICA Plain when `--to` is `pp` or `plain` (as implemented with command `print`)
* If only `--from` is given, also use its value as `--to`
The example above would then not need the `picadata` script:
pica filter -s "002@.0 =~ '^O'" records.pp
pica filter -s "002@.0 =~ '^O'" -f plain < records.pp # equivalent
    pica filter -s "002@.0 =~ '^O'" -f plain -t plain < records.pp # equivalent | 1.0 | Support to choose PICA Plain as input and output format - I usually store PICA data in PICA Plain for easy inspection and analysis with command line tools and text editors. To make use of `pica-rs`, the records need to be converted between Normalized PICA+ and PICA plain, e.g.:
picadata -t plus records.pp | pica filter -s "002@.0 =~ '^O'" | picadata -f plus -t plain
picadata -t plus records.pp | pica filter -s "002@.0 =~ '^O'" | pica print # equivalent
I would like to use [picadata options -t, --to and -f, --from](https://metacpan.org/dist/PICA-Data/view/script/picadata#OPTIONS) in `pica-rs` as well:
* Default value is `plus` (with alias `norm`) to read and write normalized PICA+ (as implemented)
* Parse PICA Plain when `--from` is `pp` or `plain` or input filename has file extension `.pp` or `.plain`
* Write PICA Plain when `--to` is `pp` or `plain` (as implemented with command `print`)
* If only `--from` is given, also use its value as `--to`
The example above would then not need the `picadata` script:
pica filter -s "002@.0 =~ '^O'" records.pp
pica filter -s "002@.0 =~ '^O'" -f plain < records.pp # equivalent
pica filter -s "002@.0 =~ '^O'" -f plain -t plain < records.pp # equivalent | priority | support to choose pica plain as input and output format i use to store pica data in pica plain for easy inspection and analysis with command line tools and text editors to make use of pica rs the records need to be converted between normalized pica and pica plain e g picadata t plus records pp pica filter s o picadata f plus t plain picadata t plus records pp pica filter s o pica print equivalent i would like to use in pica rs as well default value is plus with alias norm to read and write normalized pica as implemented parse pica plain when from is pp or plain or input filename has file extension pp or plain write pica plain when to is pp or plain as implemented with command print if only from is given also use its value as to the example above would then not need the picadata script pica filter s o records pp pica filter s o f plain records pp equivalent pica filter s o f plain t plain records pp equivalent | 1 |
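The resolution rules requested above (default `plus`, inference from the `.pp`/`.plain` file extension, and `--to` defaulting to the value of `--from`) can be sketched as a small resolver. This is an illustrative Python sketch of the proposed behavior, not pica-rs code; the function and alias names are made up.

```python
import os

# Aliases per the proposal: "norm" maps to normalized PICA+, "pp" to PICA Plain.
ALIASES = {"plus": "plus", "norm": "plus", "pp": "plain", "plain": "plain"}

def resolve_formats(filename=None, from_=None, to=None):
    # Infer --from from the file extension when not given explicitly.
    if from_ is None and filename:
        ext = os.path.splitext(filename)[1].lstrip(".")
        if ext in ("pp", "plain"):
            from_ = "plain"
    from_ = ALIASES.get(from_, "plus") if from_ else "plus"
    # If only --from is given, its value is also used for --to.
    to = ALIASES.get(to, "plus") if to else from_
    return from_, to

print(resolve_formats("records.pp"))   # ('plain', 'plain')
print(resolve_formats())               # ('plus', 'plus')
print(resolve_formats(from_="pp"))     # ('plain', 'plain')
```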
642,264 | 20,871,970,475 | IssuesEvent | 2022-03-22 12:49:16 | Daves-Astrophotography/UniUSBSQMServer | https://api.github.com/repos/Daves-Astrophotography/UniUSBSQMServer | closed | Trend width is not constrained by Logging Limit | bug Low Priority | **Describe the bug**
At present, by design, the trend width is not constrained by the record limit. Need to consider the impact of this under the longer term. If the application was left running for an extended period, the trend bitmap would continue to grow indefinitely. This is the same outcome as setting No Logging Limit.
This will need to be investigated over the longer term to see the impact on memory and stability; we may have to add an artificial constraint as a fallback, e.g. 24/48/168/(?) hours etc., depending on the logging interval.
**To Reproduce**
Steps to reproduce the behavior:
Connect to SQM unit with the logging limit set to 'No Limit'
**Expected behavior**
Long-term effect not yet established, as the application has not been run for longer than around 12 hours at a 10-second logging interval.
| 1.0 | Trend width is not constrained by Logging Limit - **Describe the bug**
At present, by design, the trend width is not constrained by the record limit. Need to consider the impact of this under the longer term. If the application was left running for an extended period, the trend bitmap would continue to grow indefinitely. This is the same outcome as setting No Logging Limit.
This will need to be investigated over the longer term to see the impact on memory and stability; we may have to add an artificial constraint as a fallback, e.g. 24/48/168/(?) hours etc., depending on the logging interval.
**To Reproduce**
Steps to reproduce the behavior:
Connect to SQM unit with the logging limit set to 'No Limit'
**Expected behavior**
Long-term effect not yet established, as the application has not been run for longer than around 12 hours at a 10-second logging interval.
| priority | trend width is not constrained by logging limit describe the bug at present by design the trend width is not constrained by the record limit need to consider the impact of this under the longer term if the application was left running for an extended period the trend bitmap would continue to grow indefinitely this is the same outcome as setting no logging limit this will need to be investigated over the longer term to see the impact on memory and stability may have to add an artificial constraint as fall back e g hours etc depending on the logging interval to reproduce steps to reproduce the behavior connect to sqm unit with the logging limit set to no limit expected behavior long term effect not yet established as not run the application for longer than around hours at second logging interval | 1 |
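The artificial constraint suggested in the report can be sketched as a fixed-capacity ring buffer, so the trend history stops growing even when 'No Logging Limit' is set. The cap below is a hypothetical 24 hours at a 10-second interval; this is not the application's actual data structure.

```python
from collections import deque

# Hypothetical cap: 24 hours of readings at a 10-second logging interval.
MAX_SAMPLES = 24 * 60 * 60 // 10   # 8640 samples

trend = deque(maxlen=MAX_SAMPLES)  # oldest samples are evicted automatically

def record_sample(value):
    trend.append(value)

# Simulate running past the cap: memory stays bounded.
for reading in range(MAX_SAMPLES + 1000):
    record_sample(reading)

print(len(trend))   # stays at MAX_SAMPLES
```

With `maxlen` set, appending beyond the cap silently drops the oldest entries, which matches the "rolling trend" semantics while keeping memory use constant.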
340,606 | 10,275,860,584 | IssuesEvent | 2019-08-24 12:06:33 | augmentmy-world/artoolkitX.js | https://api.github.com/repos/augmentmy-world/artoolkitX.js | opened | minified version of artoolkitx.js | -- Low priority enhancement javascript | At the moment there is no artoolkitx.min.js but we need the minified version of the libs
A reminder for the future. | 1.0 | minified version of artoolkitx.js - At the moment there is no artoolkitx.min.js but we need the minified version of the libs
A reminder for the future. | priority | minified version of artoolkitx js at the moment there is no a artoolkitx min js but we need the minified version of the libs a reminder for the future | 1 |
349,470 | 10,469,908,246 | IssuesEvent | 2019-09-23 00:34:26 | core-plot/core-plot | https://api.github.com/repos/core-plot/core-plot | closed | Implicit Conversion errors with Xcode 10.2.1 | Bug Priority-Low | I am integrating `core-plot release 2.3` using Carthage into my project but it failed due to implicit conversion errors in `CPTLegend.m`. Please find the error log below
`Carthage/Checkouts/core-plot/framework/Source/CPTLegend.m:589:153: error: implicit conversion loses floating-point precision: 'double' to 'CGFloat' (aka 'float') [-Werror,-Wconversion]
columnPositions[col + 1] = columnPositions[col] + padLeft + width + padRight + (isHorizontalLayout ? theOffset + theSwatchSize.width : 0.0) + theColumnMargin;`
I have also tried building `Core Plot` with Xcode 10.2.1 choosing the device target as `Generic iOS Device` and it gave the same set of errors.
Any help would be appreciated.
Thanks in Advance.
| 1.0 | Implicit Conversion errors with Xcode 10.2.1 - I am integrating `core-plot release 2.3` using Carthage into my project but it failed due to implicit conversion errors in `CPTLegend.m`. Please find the error log below
`Carthage/Checkouts/core-plot/framework/Source/CPTLegend.m:589:153: error: implicit conversion loses floating-point precision: 'double' to 'CGFloat' (aka 'float') [-Werror,-Wconversion]
columnPositions[col + 1] = columnPositions[col] + padLeft + width + padRight + (isHorizontalLayout ? theOffset + theSwatchSize.width : 0.0) + theColumnMargin;`
I have also tried building `Core Plot` with Xcode 10.2.1 choosing the device target as `Generic iOS Device` and it gave the same set of errors.
Any help would be appreciated.
Thanks in Advance.
| priority | implicit conversion errors with xcode i am integrating core plot release using carthage into my project but it failed due to implicit conversion errors in cptlegend m please find the error log below carthage checkouts core plot framework source cptlegend m error implicit conversion loses floating point precision double to cgfloat aka float columnpositions columnpositions padleft width padright ishorizontallayout theoffset theswatchsize width thecolumnmargin i have also tried building core plot with xcode choosing the device target as generic ios device and it gave the same set of errors any help would be appreciated thanks in advance | 1 |
434,528 | 12,519,679,810 | IssuesEvent | 2020-06-03 14:46:57 | decentraland/explorer | https://api.github.com/repos/decentraland/explorer | closed | Wrap mode on texture enum starts on 1, should start in 0 | bug low priority | Looks like the wrap mode accepts values 0, 1, 2
But if I go to the function definition, the smart tip that I see suggests values 1, 2, 3
```
/**
* Enables texture wrapping for this material.
* | Value | Type |
* |-------|-----------|
* | 1 | CLAMP |
* | 2 | WRAP |
* | 3 | MIRROR |
*/
readonly wrap: number
/**
* Defines if this texture has an alpha channel
*/
readonly hasAlpha: boolean
constructor(src: string, opts?: Partial<Pick<Texture, 'samplingMode' | 'wrap' | 'hasAlpha'>>)
}
```
So for example if I set the wrap mode to 2, the reference says I should get "wrap", but what I really see is "mirror"
This code lets you try it out easily. Here the wrap mode should be wrapping the texture, but it's really mirroring it
```ts
const fishtTexture = new Texture('images/pufferfish.jpg', { wrap: 2 })
const fishMaterial = new Material()
fishMaterial.albedoTexture = fishtTexture
function spawnPlane(x: number, y: number, z: number, material: Material) {
// create the entity
const plane = new Entity()
// add a transform to the entity
plane.addComponent(new Transform({ position: new Vector3(x, y, z) }))
// add a shape to the entity
plane.addComponent(new PlaneShape())
plane.addComponent(material)
// add the entity to the engine
engine.addEntity(plane)
return plane
}
const plane = spawnPlane(4, 1, 4, fishMaterial)
plane.getComponent(PlaneShape).uvs = setUVs(3, 3)
function setUVs(rows: number, cols: number) {
return [
    // North side of unrotated plane
0, //lower-left corner
0,
cols, //lower-right corner
0,
cols, //upper-right corner
rows,
0, //upper left-corner
rows,
    // South side of unrotated plane
cols, // lower-right corner
0,
0, // lower-left corner
0,
0, // upper-left corner
rows,
cols, // upper-right corner
rows,
]
}
```
| 1.0 | Wrap mode on texture enum starts on 1, should start in 0 - Looks like the wrap mode accepts values 0, 1, 2
But if I go to the function definition, the smart tip that I see suggests values 1, 2, 3
```
/**
* Enables texture wrapping for this material.
* | Value | Type |
* |-------|-----------|
* | 1 | CLAMP |
* | 2 | WRAP |
* | 3 | MIRROR |
*/
readonly wrap: number
/**
* Defines if this texture has an alpha channel
*/
readonly hasAlpha: boolean
constructor(src: string, opts?: Partial<Pick<Texture, 'samplingMode' | 'wrap' | 'hasAlpha'>>)
}
```
So for example if I set the wrap mode to 2, the reference says I should get "wrap", but what I really see is "mirror"
This code lets you try it out easily. Here the wrap mode should be wrapping the texture, but it's really mirroring it
```ts
const fishtTexture = new Texture('images/pufferfish.jpg', { wrap: 2 })
const fishMaterial = new Material()
fishMaterial.albedoTexture = fishtTexture
function spawnPlane(x: number, y: number, z: number, material: Material) {
// create the entity
const plane = new Entity()
// add a transform to the entity
plane.addComponent(new Transform({ position: new Vector3(x, y, z) }))
// add a shape to the entity
plane.addComponent(new PlaneShape())
plane.addComponent(material)
// add the entity to the engine
engine.addEntity(plane)
return plane
}
const plane = spawnPlane(4, 1, 4, fishMaterial)
plane.getComponent(PlaneShape).uvs = setUVs(3, 3)
function setUVs(rows: number, cols: number) {
return [
    // North side of unrotated plane
0, //lower-left corner
0,
cols, //lower-right corner
0,
cols, //upper-right corner
rows,
0, //upper left-corner
rows,
    // South side of unrotated plane
cols, // lower-right corner
0,
0, // lower-left corner
0,
0, // upper-left corner
rows,
cols, // upper-right corner
rows,
]
}
```
| priority | wrap mode on texture enum starts on should start in looks like the wrap mode accepts values but if i go to the function definition the smart tip that i see suggests values enables texture wrapping for this material value type clamp wrap mirror readonly wrap number defines if this texture has an alpha channel readonly hasalpha boolean constructor src string opts partial so for example if i set the wrap mode to the reference says i should get wrap but what i really see is mirror this code lets you try it out easily here the wrap mode should be wrapping the texture but it s really mirroring it ts const fishttexture new texture images pufferfish jpg wrap const fishmaterial new material fishmaterial albedotexture fishttexture function spawnplane x number y number z number material material create the entity const plane new entity add a transform to the entity plane addcomponent new transform position new x y z add a shape to the entity plane addcomponent new planeshape plane addcomponent material add the entity to the engine engine addentity plane return plane const plane spawnplane fishmaterial plane getcomponent planeshape uvs setuvs function setuvs rows number cols number return north side of unrortated plane lower left corner cols lower right corner cols upper right corner rows upper left corner rows south side of unrortated plane cols lower right corner lower left corner upper left corner rows cols upper right corner rows | 1 |
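The off-by-one between the doc comment (values 1–3) and the observed behavior (values 0–2) can be made explicit with two mappings; the `{ wrap: 2 }` repro above matches the zero-based one. This is an illustrative Python sketch, and the constant names are not the SDK's.

```python
from enum import IntEnum

class WrapDocumented(IntEnum):   # what the doc comment claims
    CLAMP = 1
    WRAP = 2
    MIRROR = 3

class WrapObserved(IntEnum):     # what the issue reports actually happens
    CLAMP = 0
    WRAP = 1
    MIRROR = 2

# Passing { wrap: 2 } should mean WRAP per the docs,
# but the renderer treats 2 as MIRROR:
print(WrapDocumented(2).name)   # WRAP
print(WrapObserved(2).name)     # MIRROR
```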
357,052 | 10,601,165,698 | IssuesEvent | 2019-10-10 11:44:19 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | b2 generator doesn't specify transitive dependencies | complex: low priority: medium stage: review type: bug | Boost.Build targets created by files produced by the b2 generator don't list their dependencies. As a result, if you `require` package `A` which depends on package `B`, and then you use a target for package `A`, you are likely to encounter a build error, because the build system is not aware of the dependency between `A` and `B`.
Example project:
```python
# conanfile.py
from conans import ConanFile
class FoobarConan(ConanFile):
name = 'foobar'
version = '0.0.1'
settings = 'arch', 'os', 'compiler'
requires = 'boost_core/[>=1.68]@bincrafters/stable'
build_requires = 'b2/4.0.0'
generators = "b2"
exports_sources = "jamroot.jam", "*.cpp"
def build(self):
self.run('b2')
```
```jam
# jamroot.jam
project foobar ;
# Boost.Core actually depends on Boost.Config
lib foobar : foobar.cpp /boost_core//libs ;
```
```c++
// foobar.cpp
#include <boost/core/null_deleter.hpp>
```
Relevant Conan version: 1.18.5
- [X] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. | 1.0 | b2 generator doesn't specify transitive dependencies - Boost.Build targets created by files produced by b2 generator don't list their dependencies. As a result of that, if you `require` package `A` which depends on package `B`, and then you use a target for package `A`, you are likely to encounter a build error, because the build system is not aware of dependency between `A` and `B`.
Example project:
```python
# conanfile.py
from conans import ConanFile
class FoobarConan(ConanFile):
name = 'foobar'
version = '0.0.1'
settings = 'arch', 'os', 'compiler'
requires = 'boost_core/[>=1.68]@bincrafters/stable'
build_requires = 'b2/4.0.0'
generators = "b2"
exports_sources = "jamroot.jam", "*.cpp"
def build(self):
self.run('b2')
```
```jam
# jamroot.jam
project foobar ;
# Boost.Core actually depends on Boost.Config
lib foobar : foobar.cpp /boost_core//libs ;
```
```c++
// foobar.cpp
#include <boost/core/null_deleter.hpp>
```
Relevant Conan version: 1.18.5
- [X] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. | priority | generator doesn t specify transitive dependencies boost build targets created by files produced by generator don t list their dependencies as a result of that if you require package a which depends on package b and then you use a target for package a you are likely to encounter a build error because the build system is not aware of dependency between a and b example project python conanfile py from conans import conanfile class foobarconan conanfile name foobar version settings arch os compiler requires boost core bincrafters stable build requires generators exports sources jamroot jam cpp def build self self run jam jamroot jam project foobar boost core actually depends on boost config lib foobar foobar cpp boost core libs c foobar cpp include relevant conan version i ve read the i ve specified the conan version operating system version and any tool that can be relevant i ve explained the steps to reproduce the error or the motivation use case of the question suggestion | 1 |
207,974 | 7,134,916,562 | IssuesEvent | 2018-01-22 22:34:21 | vmware/vic-ui | https://api.github.com/repos/vmware/vic-ui | closed | In Create VCH UI, DNS server is mandatory if you set static IP on public | area/ui priority/low team/lifecycle | In the docs for the `vic-machine create --dns-server` option, we state the following:
---
A DNS server for the VCH endpoint VM to use on the public, client, and management networks.
- If you specify a DNS server, vSphere Integrated Containers Engine uses the same DNS server setting for all three of the public, client, and management networks.
- If you do not specify a DNS server and you specify a static IP address for the VCH endpoint VM on all three of the client, public, and management networks, vSphere Integrated Containers Engine uses the Google public DNS service.
- If you do not specify a DNS server and you use DHCP for all of the client, public, and management networks, vSphere Integrated Containers Engine uses the DNS servers that DHCP provides.
---
But, in the UI, it appears that a DNS server is mandatory if you set a static IP on public:

If you use DHCP on the public network but static IPs on client and management, then the DNS server is not required, and the NEXT button activates:

If you set a static IP on all 3 of public, client, and management, the DNS server is required. However, in the CLI if you set static IPs everywhere, then according to the doc the Google public DNS is used, if you don't set a DNS server:

So, it seems that the behaviour of the UI is inconsistent with the CLI, or at least with the behaviour of the CLI as documented until now. So, is the doc statement above correct? | 1.0 | In Create VCH UI, DNS server is mandatory if you set static IP on public - In the docs for the `vic-machine create --dns-server` option, we state the following:
---
A DNS server for the VCH endpoint VM to use on the public, client, and management networks.
- If you specify a DNS server, vSphere Integrated Containers Engine uses the same DNS server setting for all three of the public, client, and management networks.
- If you do not specify a DNS server and you specify a static IP address for the VCH endpoint VM on all three of the client, public, and management networks, vSphere Integrated Containers Engine uses the Google public DNS service.
- If you do not specify a DNS server and you use DHCP for all of the client, public, and management networks, vSphere Integrated Containers Engine uses the DNS servers that DHCP provides.
---
But, in the UI, it appears that a DNS server is mandatory if you set a static IP on public:

If you use DHCP on the public network but static IPs on client and management, then the DNS server is not required, and the NEXT button activates:

If you set a static IP on all 3 of public, client, and management, the DNS server is required. However, in the CLI if you set static IPs everywhere, then according to the doc the Google public DNS is used, if you don't set a DNS server:

So, it seems that the behaviour of the UI is inconsistent with the CLI, or at least with the behaviour of the CLI as documented until now. So, is the doc statement above correct? | priority | in create vch ui dns server is mandatory if you set static ip on public in the docs for the vic machine create dns server option we state the following a dns server for the vch endpoint vm to use on the public client and management networks if you specify a dns server vsphere integrated containers engine uses the same dns server setting for all three of the public client and management networks if you do not specify a dns server and you specify a static ip address for the vch endpoint vm on all three of the client public and management networks vsphere integrated containers engine uses the google public dns service if you do not specify a dns server and you use dhcp for all of the client public and management networks vsphere integrated containers engine uses the dns servers that dhcp provides but in the ui it appears that a dns server is mandatory if you set a static ip on public if you use dhcp on the public network but static ips on client and management then the dns server is not required and the next button activates if you set a static ip on all of public client and management the dns server is required however in the cli if you set static ips everywhere then according to the doc the google public dns is used if you don t set a dns server so it seems that the behaviour of the ui is inconsistent with the cli or at least with the behaviour of the cli as documented until now so is the doc statement above correct | 1 |
646,313 | 21,044,266,978 | IssuesEvent | 2022-03-31 14:45:41 | AY2122S2-TIC4002-F18-3/tp2 | https://api.github.com/repos/AY2122S2-TIC4002-F18-3/tp2 | closed | [PE-D] List and Filter in the table | priority.Low severity.VeryLow | In the table there is error:
list TAG should be removed;
`filter NAME` should be `filter TAG`
<!--session: 1648207687672-d90effc3-d576-49b5-9299-e7983d44a86f-->
<!--Version: Web v3.4.2-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: l-shihao/ped#10 | 1.0 | [PE-D] List and Filter in the table - In the table there is error:
list TAG should be removed;
`filter NAME` should be `filter TAG`
<!--session: 1648207687672-d90effc3-d576-49b5-9299-e7983d44a86f-->
<!--Version: Web v3.4.2-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: l-shihao/ped#10 | priority | list and filter in the table in the table there is error list tag should be removed filter name should be filter tag labels severity medium type documentationbug original l shihao ped | 1 |
112,043 | 4,501,652,566 | IssuesEvent | 2016-09-01 10:08:32 | thommoboy/There-are-no-brakes | https://api.github.com/repos/thommoboy/There-are-no-brakes | opened | occasionally players leave a player on the ground in elevator section | enhancement Priority Low Tutorial | probably moving the top pressureplate to the right of the elevator instead of left will make people more likely to discover it | 1.0 | occasionally players leave a player on the ground in elevator section - probably moving the top pressureplate to the right of the elevator instead of left will make people more likely to discover it | priority | occasionally players leave a player on the ground in elevator section probably moving the top pressureplate to the right of the elevator instead of left will make people more likely to discover it | 1 |
733,242 | 25,296,618,343 | IssuesEvent | 2022-11-17 07:20:53 | OpenSIPS/opensips | https://api.github.com/repos/OpenSIPS/opensips | closed | [CRASH] core dump in next_branches --> search_next_avp in 2.4.9 (nightly build as of march 5th) | bug low-priority investigating | <!--
Thank you for reporting a crash in OpenSIPS!
In order for us to understand better the reason of the crash, kindly provide all the available information you have about it, according to the template below
-->
**OpenSIPS version you are running**
<!-- paste below, inside the ticks block, the output of the `opensips -V` command -->
```
version: opensips 2.4.9 (x86_64/linux)
flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535
poll method support: poll, epoll, sigio_rt, select.
main.c compiled on 21:22:53 Mar 5 2021 with gcc 4.8.5
```
**Crash Core Dump**
<!--
*Please* DO NOT post the content of the corefile here, but rather provide *a link* to a place (dropbox, pastebin, gdrive) where you stored the output of the core dump.
If you don't have a core dump, please generate one according to the steps described here:
https://www.opensips.org/Documentation/TroubleShooting-Crash
-->
https://pastebin.com/raw/WPBd27jq
**Describe the traffic that generated the bug**
<!--
Please describe what kind of traffic made OpenSIPS crash
-->
INVITE sent to remote server which responded with 300 and opensips crashed in next_branches. We have seen this in fail-over scenarios after getting a negative response as well. Always in next_branches. It is very intermittent. We sometimes will run at high volumes for weeks without having a crash.
**To Reproduce**
<!--
Steps to reproduce the behavior:
Example:
1. Start OpenSIPS
2. Start traffic
3. Check OpenSIPS crashed
-->
1. start opensips
2. start traffic
3. run for week to month
4. get core dump in next_branches function randomly after having called it thousands of times per day previous.
**Relevant System Logs**
<!--
Please provide, in ticks block (```example```), relevant information from the system logs
-->
```2021-03-25T18:04:51.776626+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:04:59.222669+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: INFO:tm:t_forward_nonack: discarding fwd for a cancelled transaction
2021-03-25T18:04:59.223038+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:05:04.727529+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4480]: INFO:tm:t_forward_nonack: discarding fwd for a cancelled transaction
2021-03-25T18:05:04.727831+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4480]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:05:06.866238+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4487]: CRITICAL:core:sig_usr: segfault in process pid: 4487, id: 9
2021-03-25T18:05:10.465386+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: child process 4487 exited by a signal 11
2021-03-25T18:05:10.465834+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: core was generated
2021-03-25T18:05:10.466281+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: terminating due to SIGCHLD
2021-03-25T18:05:10.466658+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4477]: INFO:core:sig_usr: signal 15 received
```
**OS/environment information**
- Operating System: CentOS 7<!-- (example: `Debian 9`) -->
- OpenSIPS installation: manual download source from git nightly build<!-- (example: `git`/`source`/`debs`/`manual packages`) -->
- other relevant information:
**Additional context**
<!-- Add any other context about the problem here. -->
| 1.0 | [CRASH] core dump in next_branches --> search_next_avp in 2.4.9 (nightly build as of march 5th) - <!--
Thank you for reporting a crash in OpenSIPS!
In order for us to understand better the reason of the crash, kindly provide all the available information you have about it, according to the template below
-->
**OpenSIPS version you are running**
<!-- paste below, inside the ticks block, the output of the `opensips -V` command -->
```
version: opensips 2.4.9 (x86_64/linux)
flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535
poll method support: poll, epoll, sigio_rt, select.
main.c compiled on 21:22:53 Mar 5 2021 with gcc 4.8.5
```
**Crash Core Dump**
<!--
*Please* DO NOT post the content of the corefile here, but rather provide *a link* to a place (dropbox, pastebin, gdrive) where you stored the output of the core dump.
If you don't have a core dump, please generate one according to the steps described here:
https://www.opensips.org/Documentation/TroubleShooting-Crash
-->
https://pastebin.com/raw/WPBd27jq
**Describe the traffic that generated the bug**
<!--
Please describe what kind of traffic made OpenSIPS crash
-->
INVITE sent to remote server which responded with 300 and opensips crashed in next_branches. We have seen this in fail-over scenarios after getting a negative response as well. Always in next_branches. It is very intermittent. We sometimes will run at high volumes for weeks without having a crash.
**To Reproduce**
<!--
Steps to reproduce the behavior:
Example:
1. Start OpenSIPS
2. Start traffic
3. Check OpenSIPS crashed
-->
1. start opensips
2. start traffic
3. run for week to month
4. get core dump in next_branches function randomly after having called it thousands of times per day previous.
**Relevant System Logs**
<!--
Please provide, in ticks block (```example```), relevant information from the system logs
-->
```2021-03-25T18:04:51.776626+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:04:59.222669+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: INFO:tm:t_forward_nonack: discarding fwd for a cancelled transaction
2021-03-25T18:04:59.223038+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4490]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:05:04.727529+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4480]: INFO:tm:t_forward_nonack: discarding fwd for a cancelled transaction
2021-03-25T18:05:04.727831+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4480]: ERROR:tm:w_t_relay: t_forward_nonack failed
2021-03-25T18:05:06.866238+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4487]: CRITICAL:core:sig_usr: segfault in process pid: 4487, id: 9
2021-03-25T18:05:10.465386+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: child process 4487 exited by a signal 11
2021-03-25T18:05:10.465834+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: core was generated
2021-03-25T18:05:10.466281+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4474]: INFO:core:handle_sigs: terminating due to SIGCHLD
2021-03-25T18:05:10.466658+00:00 sbc-proxy-01 /usr/local/opensips/sbin/opensips[4477]: INFO:core:sig_usr: signal 15 received
```
**OS/environment information**
- Operating System: CentOS 7<!-- (example: `Debian 9`) -->
- OpenSIPS installation: manual download source from git nightly build<!-- (example: `git`/`source`/`debs`/`manual packages`) -->
- other relevant information:
**Additional context**
<!-- Add any other context about the problem here. -->
| priority | core dump in next branches search next avp in nightly build as of march thank you for reporting a crash in opensips in order for us to understand better the reason of the crash kindly provide all the available information you have about it according to the template below opensips version you are running version opensips linux flags stats on disable nagle use mcast shm mmap pkg malloc f malloc fast lock adaptive wait adaptive wait loops max recv buffer size max listen max uri size buf size poll method support poll epoll sigio rt select main c compiled on mar with gcc crash core dump please do not post the content of the corefile here but rather provide a link to a place dropbox pastebin gdrive where you stored the output of the core dump if you don t have a core dump please generate one according to the steps described here describe the traffic that generated the bug please describe what kind of traffic made opensips crash invite sent to remote server which responded with and opensips crashed in next branches we have seen this in fail over scenarios after getting a negative response as well always in next branches it is very intermittent we sometimes will run at high volumes for weeks without having a crash to reproduce steps to reproduce the behavior example start opensips start traffic check opensips crashed start opensips start traffic run for week to month get core dump in next branches function randomly after having called it thousands of times per day previous relevant system logs please provide in ticks block example relevant information from the system logs sbc proxy usr local opensips sbin opensips error tm w t relay t forward nonack failed sbc proxy usr local opensips sbin opensips info tm t forward nonack discarding fwd for a cancelled transaction sbc proxy usr local opensips sbin opensips error tm w t relay t forward nonack failed sbc proxy usr local opensips sbin opensips info tm t forward nonack discarding fwd for a cancelled transaction 
sbc proxy usr local opensips sbin opensips error tm w t relay t forward nonack failed sbc proxy usr local opensips sbin opensips critical core sig usr segfault in process pid id sbc proxy usr local opensips sbin opensips info core handle sigs child process exited by a signal sbc proxy usr local opensips sbin opensips info core handle sigs core was generated sbc proxy usr local opensips sbin opensips info core handle sigs terminating due to sigchld sbc proxy usr local opensips sbin opensips info core sig usr signal received os environment information operating system centos opensips installation manual download source from git nightly build other relevant information additional context | 1 |
145 | 2,492,073,954 | IssuesEvent | 2015-01-04 11:34:30 | ferstaberinde/FAMDB | https://api.github.com/repos/ferstaberinde/FAMDB | closed | add mission author selection should default to currently logged in user | enhancement priority:low UI | And possibly automatically set to text-input if author does not exist | 1.0 | add mission author selection should default to currently logged in user - And possibly automatically set to text-input if author does not exist | priority | add mission author selection should default to currently logged in user and possibly automatically set to text input if author does not exist | 1 |
825,555 | 31,394,160,970 | IssuesEvent | 2023-08-26 18:14:58 | KevinVG207/UmaLauncher | https://api.github.com/repos/KevinVG207/UmaLauncher | closed | Add option to preferences: "Minimize CarrotJuicer Console" | enhancement low priority | Would be a nice option to have, I guess. | 1.0 | Add option to preferences: "Minimize CarrotJuicer Console" - Would be a nice option to have, I guess. | priority | add option to preferences minimize carrotjuicer console would be a nice option to have i guess | 1 |
13,838 | 2,610,305,546 | IssuesEvent | 2015-02-26 19:38:13 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | The old music of .15 was put back into .16 instead of the new cool music .16 had before :( | auto-migrated Priority-Low Type-Enhancement | ```
What steps will reproduce the problem?
1. N/A
2. N/A
3. N/A
What is the expected output? What do you see instead? cool music,nothing I hear
old music
What version of the product are you using? On what operating system? .16
something Ubuntu
Please provide any additional information below. The new music sounded fresh
and new but I am sick of listening to that old intro song , it sucks that .16
has the .15 now :( *sad panda*
```
-----
Original issue reported on code.google.com by `davidb...@gmail.com` on 5 Sep 2011 at 2:09 | 1.0 | The old music of .15 was put back into .16 instead of the new cool music .16 had before :( - ```
What steps will reproduce the problem?
1. N/A
2. N/A
3. N/A
What is the expected output? What do you see instead? cool music,nothing I hear
old music
What version of the product are you using? On what operating system? .16
something Ubuntu
Please provide any additional information below. The new music sounded fresh
and new but I am sick of listening to that old intro song , it sucks that .16
has the .15 now :( *sad panda*
```
-----
Original issue reported on code.google.com by `davidb...@gmail.com` on 5 Sep 2011 at 2:09 | priority | the old music of was put back into instead of the new cool music had before what steps will reproduce the problem n a n a n a what is the expected output what do you see instead cool music nothing i hear old music what version of the product are you using on what operating system something ubuntu please provide any additional information below the new music sounded fresh and new but i am sick of listening to that old intro song it sucks that has the now sad panda original issue reported on code google com by davidb gmail com on sep at | 1
619,916 | 19,539,634,242 | IssuesEvent | 2021-12-31 17:07:07 | pombase/website | https://api.github.com/repos/pombase/website | reopened | consider ancestry of extensions when filtering display for redundancy | enhancement low priority GO data filtering_and_extensions |

binds hrk1 part_of "protein localization to chromatin"
seems to be fully redundant with the bottom one?
| 1.0 | consider ancestry of extensions when filtering display for redundancy -

binds hrk1 part_of "protein localization to chromatin"
seems to be fully redundant with the bottom one?
| priority | consider ancestry of extensions when filtering display for redundancy binds part of protein localization to chromatin seems to be fully redundant with the bottom one | 1 |
311,458 | 9,533,402,800 | IssuesEvent | 2019-04-29 21:11:29 | ucb-bar/hammer | https://api.github.com/repos/ucb-bar/hammer | opened | Convert all length units to Decimal | DeveloperSupport low priority usability | Right now we use `float`s for most physical length units. There were real issues caused by this in the power straps code, so I wrote a function to snap things to a grid. However it's pretty easy to forget to use this, so I think going forward we should use the `Decimal` type, which will be much safer. | 1.0 | Convert all length units to Decimal - Right now we use `float`s for most physical length units. There were real issues caused by this in the power straps code, so I wrote a function to snap things to a grid. However it's pretty easy to forget to use this, so I think going forward we should use the `Decimal` type, which will be much safer. | priority | convert all length units to decimal right now we use float s for most physical length units there were real issues caused by this in the power straps code so i wrote a function to snap things to a grid however it s pretty easy to forget to use this so i think going forward we should use the decimal type which will be much safer | 1 |
482,261 | 13,903,663,896 | IssuesEvent | 2020-10-20 07:34:30 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [Modkit] Package reference assemblies in modkit | Category: Modkit Priority: Low Status: Fixed | This would mean modders don't have to try and extract the DLLs from the game | 1.0 | [Modkit] Package reference assemblies in modkit - This would mean modders don't have to try and extract the DLLs from the game | priority | package reference assemblies in modkit this would mean modders don t have to try and extract the dlls from the game | 1 |
577,175 | 17,104,722,485 | IssuesEvent | 2021-07-09 15:56:34 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Improve compatibility of TypeSafeIndex with Eigen 3.4 | priority: low team: dynamics | Related to #14968. Refer to https://github.com/RobotLocomotion/drake/pull/15335#pullrequestreview-702097433 for the motivating discussion.
In #15335, we added some `int{}` wrappers around uses of `TypeSafeIndex` to placate the compiler when using Eigen's 3.4 development branch. The purpose of `TypeSafeIndex` is to easily serve as an index, so it's a bit awkward to need `int{}` now.
We should dig into the templates or overloads and figure out what's wrong with Drake or Eigen, so that we can remove the `int{}` noise. Once we have that fix landed, we should revert #15335 and any similar `int{}` that have been added in the meantime. | 1.0 | Improve compatibility of TypeSafeIndex with Eigen 3.4 - Related to #14968. Refer to https://github.com/RobotLocomotion/drake/pull/15335#pullrequestreview-702097433 for the motivating discussion.
In #15335, we added some `int{}` wrappers around uses of `TypeSafeIndex` to placate the compiler when using Eigen's 3.4 development branch. The purpose of `TypeSafeIndex` is to easily serve as an index, so it's a bit awkward to need `int{}` now.
We should dig into the templates or overloads and figure out what's wrong with Drake or Eigen, so that we can remove the `int{}` noise. Once we have that fix landed, we should revert #15335 and any similar `int{}` that have been added in the meantime. | priority | improve compatibility of typesafeindex with eigen related to refer to for the motivating discussion in we added some int wrappers around uses of typesafeindex to placate the compiler when using eigen s development branch the purpose of typesafeindex is to easily serve as an index so it s a bit awkward to need int now we should dig into the templates or overloads and figure out what s wrong with drake or eigen so that we can remove the int noise once we have that fix landed we should revert and any similar int that have been added in the meantime | 1 |
57,465 | 3,082,014,341 | IssuesEvent | 2015-08-23 09:52:12 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Broken menu rendering on WHS or when changing themes. | bug Component-UI imported OpSys-Windows Priority-Low | _From [viptair](https://code.google.com/u/viptair/) on April 07, 2013 20:54:34_
Version x64 13123
Windows Home Server 2011
This is what the menu bar looks like after 1 day of running. Minimizing and restoring does not help; only closing and reopening the program does.
**Attachment:** [bugFL.jpg](http://code.google.com/p/flylinkdc/issues/detail?id=985)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=985_ | 1.0 | Broken menu rendering on WHS or when changing themes. - _From [viptair](https://code.google.com/u/viptair/) on April 07, 2013 20:54:34_
Version x64 13123
Windows Home Server 2011
This is what the menu bar looks like after 1 day of running. Minimizing and restoring does not help; only closing and reopening the program does.
**Attachment:** [bugFL.jpg](http://code.google.com/p/flylinkdc/issues/detail?id=985)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=985_ | priority | broken menu rendering on whs or when changing themes from on april version windows home server this is what the menu bar looks like after day of running minimizing and restoring does not help only closing and reopening the program attachment original issue | 1
360,404 | 10,688,212,860 | IssuesEvent | 2019-10-22 17:45:27 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | opened | Inconsistency between misbehaving flows menu and flow filter | low-priority bug | 

Given a flow with two or more abnormal status (long lived and elephant flow in the picture), for the dropdown the flow is counted twice, once as 1 elephant flow and once as long lived (`status_stats->incStats` in `Flow::sumStats`). However, when filtering by flow status, only the flow predominant status is taken into account, causing a "no results found" on the elephant flow status (as long lived is the predominant).
The first screenshot also shows another issue, the red triangle indicates that the flow is an elephant flow despite showing the "long lived flows". This can be explained because the "elephant flow" is the status for which an alert was initially generated (because the flow was not marked as long lived yet). | 1.0 | Inconsistency between misbehaving flows menu and flow filter - 

Given a flow with two or more abnormal status (long lived and elephant flow in the picture), for the dropdown the flow is counted twice, once as 1 elephant flow and once as long lived (`status_stats->incStats` in `Flow::sumStats`). However, when filtering by flow status, only the flow predominant status is taken into account, causing a "no results found" on the elephant flow status (as long lived is the predominant).
The first screenshot also shows another issue, the red triangle indicates that the flow is an elephant flow despite showing the "long lived flows". This can be explained because the "elephant flow" is the status for which an alert was initially generated (because the flow was not marked as long lived yet). | priority | inconsistency between misbehaving flows menu and flow filter given a flow with two or more abnormal status long lived and elephant flow in the picture for the dropdown the flow is counted twice once as elephant flow and once as long lived status stats incstats in flow sumstats however when filtering by flow status only the flow predominant status is taken into account causing a no results found on the elephant flow status as long lived is the predominant the first screenshot also shows another issue the red triangle indicates that the flow is an elephant flow despite showing the long lived flows this can be explained because the elephant flow is the status for which an alert was initially generated because the flow was not marked as long lived yet | 1 |
503,662 | 14,596,419,653 | IssuesEvent | 2020-12-20 15:46:36 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | TIFF import → Warnings | difficulty: trivial priority: low reproduce: confirmed understood: clear | Export any image to TIFF and import it back to dt. See console output with lots of "[tiff_open] warning: TIFFReadDirectory: Unknown field with tag XXXXX (0xXXXX) encountered". | 1.0 | TIFF import → Warnings - Export any image to TIFF and import it back to dt. See console output with lots of "[tiff_open] warning: TIFFReadDirectory: Unknown field with tag XXXXX (0xXXXX) encountered". | priority | tiff import → warnings export any image to tiff and import it back to dt see console output with lots of warning tiffreaddirectory unknown field with tag xxxxx encountered | 1 |
549,329 | 16,090,631,164 | IssuesEvent | 2021-04-26 16:15:14 | fiffty-50/openpilotlog | https://api.github.com/repos/fiffty-50/openpilotlog | closed | Change implementation of warning about expiring currencies | Low Priority enhancement | At the moment, expiring currencies are only shown on the HomeWidget if they are selected as 'active' in the settingswidget. This is only the case for Take-off and Landing currency at the moment.
With approaching currency expiration dates, the expiring currency should be shown in the homewidget, even if it is not 'active', if the user has selected to be warned about expiring currencies.
Affects:
- HomeWidget | 1.0 | Change implementation of warning about expiring currencies - At the moment, expiring currencies are only shown on the HomeWidget if they are selected as 'active' in the settingswidget. This is only the case for Take-off and Landing currency at the moment.
With approaching currency expiration dates, the expiring currency should be shown in the homewidget, even if it is not 'active', if the user has selected to be warned about expiring currencies.
Affects:
- HomeWidget | priority | change implementation of warning about expiring currencies at the moment expiring currencies are only shown on the homewidget if they are selected as active in the settingswidget this is only the case for take off and landing currency at the moment with approaching currency expiration dates the expiring currency should be shown in the homewidget even if it is not active if the user has selected to be warned about expiring currencies affects homewidget | 1 |
539,470 | 15,789,307,609 | IssuesEvent | 2021-04-01 22:27:58 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Custom study: Don't show "increase new card limit" when no new cards | Accepted Anki Ecosystem Compatibility Enhancement Good First Issue! Help Wanted Keep Open Priority-Low | ###### Research
[x] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid
[x] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue
[x] I have searched for similar existing issues here and on the user forum
###### Reproduction Steps
1. On the main screen, long-press a deck and select "User-defined learning"
###### Expected Result
If there are 0 "new cards", the menu item "Increase limit for new cards" is not shown.
###### Actual Result
The menu item "Increase limit for new cards" is shown even when it doesn't make sense. | 1.0 | Custom study: Don't show "increase new card limit" when no new cards - ###### Research
[x] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid
[x] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue
[x] I have searched for similar existing issues here and on the user forum
###### Reproduction Steps
1. On the main screen, long-press a deck and select "User-defined learning"
###### Expected Result
If there are 0 "new cards", the menu item "Increase limit for new cards" is not shown.
###### Actual Result
The menu item "Increase limit for new cards" is shown even when it doesn't make sense. | priority | custom study don t show increase new card limit when no new cards research i have read the and am reporting a bug or enhancement request specific to ankidroid i have checked the and the and could not find a solution to my issue i have searched for similar existing issues here and on the user forum reproduction steps on the main screen long press a deck and select user defined learning expected result if there are new cards the menu item increase limit for new cards is not shown actual result the menu item increase limit for new cards is shown even when it doesn t make sense | 1 |
773,281 | 27,152,633,086 | IssuesEvent | 2023-02-17 03:33:04 | 0auBSQ/OpenTaiko | https://api.github.com/repos/0auBSQ/OpenTaiko | closed | [Bug] With Config.ini's default sort order set to "absolute path order", every "Close" button inside the song-selection boxes is displayed at the back | bug priority: low-to-medium | The "Close" entry is missing from where it should be

All of them are displayed at the back instead






Confirmed this in every folder
When the default sort order is set to "AC15" and then changed from the in-game sort menu, this bug does not occur
Also, once the bug has occurred, changing the sort order from the sort menu does not fix it
ゲーム自体は問題なくプレイ可 | 1.0 | [Bug]Config.iniでデフォルトのソート順を「絶対パス順」にすると選曲のBoxの中の「とじる」ボタンがすべて後ろの方に表示される - あるべきところに「とじる」がなく

すべて後ろに表示されている






全部のフォルダで確認しました
デフォルトのソート順を「AC15」にし、ゲーム内のソートメニューから変更した場合はこのようなバグは起きませんでした
また、このバグが起きた後、ソートメニューからソート順を変更してもバグは残ったままでした
ゲーム自体は問題なくプレイ可 | priority | config iniでデフォルトのソート順を「絶対パス順」にすると選曲のboxの中の「とじる」ボタンがすべて後ろの方に表示される あるべきところに「とじる」がなく すべて後ろに表示されている 全部のフォルダで確認しました デフォルトのソート順を「 」にし、ゲーム内のソートメニューから変更した場合はこのようなバグは起きませんでした また、このバグが起きた後、ソートメニューからソート順を変更してもバグは残ったままでした ゲーム自体は問題なくプレイ可 | 1 |
746,036 | 26,011,027,209 | IssuesEvent | 2022-12-21 01:45:06 | neuropsychology/NeuroKit | https://api.github.com/repos/neuropsychology/NeuroKit | closed | More PPG and Blood Pressure related resources | feature idea :fire: wontfix PPG/BVP :heartbeat: low priority :sleeping: | Chanced upon this [paper](https://www.sciencedirect.com/science/article/abs/pii/S1746809418302209) on blood pressure estimation from PPG signals and thought it might be useful for enhancing our PPG features 😄
From raw PPG signals, they estimate
- Mean Arterial Pressure (MAP)
- Diastolic Blood Pressure (DBP)
- Systolic Blood Pressure (SBP)
(Though note their results on the appropriateness of the algorithms)
Also see this [issue](https://github.com/neuropsychology/NeuroKit/issues/312) regarding Nabian et al.'s package on Blood Pressure preprocessing pipelines | 1.0 | More PPG and Blood Pressure related resources - Chanced upon this [paper](https://www.sciencedirect.com/science/article/abs/pii/S1746809418302209) on blood pressure estimation from PPG signals and thought it might be useful for enhancing our PPG features 😄
From raw PPG signals, they estimate
- Mean Arterial Pressure (MAP)
- Diastolic Blood Pressure (DBP)
- Systolic Blood Pressure (SBP)
(Though note their results on the appropriateness of the algorithms)
Also see this [issue](https://github.com/neuropsychology/NeuroKit/issues/312) regarding Nabian et al.'s package on Blood Pressure preprocessing pipelines | priority | more ppg and blood pressure related resources chanced upon this on blood pressure estimation from ppg signals and thought it might be useful for enhancing our ppg features 😄 from raw ppg signals they estimate mean arterial pressure map diastolic blood pressure dbp systolic blood pressure sbp though note their results on the appropriateness of the algorithms also see this regarding nabian et al s package on blood pressure preprocessing pipelines | 1 |
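The three quantities listed in the record above are linked by a textbook approximation: at resting heart rates, mean arterial pressure sits roughly one third of the way up the pulse pressure from the diastolic value, MAP ≈ DBP + (SBP − DBP)/3. A small sketch of that relation (independent of NeuroKit's actual API, which is not shown here):

```python
def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """Approximate MAP from systolic and diastolic pressure (mmHg).

    Uses the common clinical rule MAP ~= DBP + (SBP - DBP) / 3,
    valid at typical resting heart rates.
    """
    if sbp < dbp:
        raise ValueError("systolic pressure must be >= diastolic pressure")
    return dbp + (sbp - dbp) / 3.0
```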
696,167 | 23,888,071,670 | IssuesEvent | 2022-09-08 09:16:06 | netdata/netdata-cloud | https://api.github.com/repos/netdata/netdata-cloud | closed | [Bug]: Invalid re-direct after refreshing page following custom dashboard creation | bug internal submit priority/low visualizations-team cloud-frontend | ### Bug description
If user refreshes page immediately after creating a custom dashboard he is re-directed to Dashboards tab and sees an invalid link for dashboard error message.
### Expected behavior
After user refreshes the page he should land on the page he previously was on
### Steps to reproduce
1. Login to netdata cloud
2. claim an agent
3. create a custom dashboard
4. save custom dashboard
5. refresh the page
### Screenshots

### Error Logs
_No response_
### Desktop
OS: MacOs
Browser chrome
Browser Version 101
### Additional context
_No response_ | 1.0 | [Bug]: Invalid re-direct after refreshing page following custom dashboard creation - ### Bug description
If user refreshes page immediately after creating a custom dashboard he is re-directed to Dashboards tab and sees an invalid link for dashboard error message.
### Expected behavior
After user refreshes the page he should land on the page he previously was on
### Steps to reproduce
1. Login to netdata cloud
2. claim an agent
3. create a custom dashboard
4. save custom dashboard
5. refresh the page
### Screenshots

### Error Logs
_No response_
### Desktop
OS: MacOs
Browser chrome
Browser Version 101
### Additional context
_No response_ | priority | invalid re direct after refreshing page following custom dashboard creation bug description if user refreshes page immediately after creating a custom dashboard he is re directed to dashboards tab and sees an invalid link for dashboard error message expected behavior after user refreshes the page he should land on the page he previously was on steps to reproduce login to netdata cloud claim an agent create a custom dashboard save custom dashboard refresh the page screenshots error logs no response desktop os macos browser chrome browser version additional context no response | 1 |
111,859 | 4,489,662,748 | IssuesEvent | 2016-08-30 11:54:58 | thommoboy/There-are-no-brakes | https://api.github.com/repos/thommoboy/There-are-no-brakes | closed | ropes in tutorial level looking weird | bug Priority Low Tutorial | didn't get in the way of the players at all, but they did mention it
replace rope objects with line renderers | 1.0 | ropes in tutorial level looking weird - didn't get in the way of the players at all, but they did mention it
replace rope objects with line renderers | priority | ropes in tutorial level looking weird didn t get in the way of the players at all but they did mention it replace rope objects with line renderers | 1 |
419,028 | 12,216,714,505 | IssuesEvent | 2020-05-01 15:42:16 | XeroAPI/xero-php-oauth2 | https://api.github.com/repos/XeroAPI/xero-php-oauth2 | closed | getInvoice returns an Invoices model instead of Invoice | Priority: Low Status: Completed Type: Enhancement | It'd be nice if there weren't extra steps to get to the specified invoice when [calling `getInvoice`](https://github.com/XeroAPI/xero-php-oauth2/blob/58451ba80d1a09ce8120d02766508713fa1da45b/docs/accounting/Api/AccountingApi.md#getinvoice).
AFAIK this function should never return multiple invoices (that would be what `getInvoices` is for), but to get to the requested Invoice your code has to look a bit cluttered:
```php
$invoicesResponse = $apiInstance->getInvoice(
$record['tenant_id'],
$guid
);
if (!empty($invoicesResponse)) {
$invoices = $invoicesResponse->getInvoices();
if (count($invoices) > 0) {
return $invoices[0];
}
}
```
Whereas it could be:
```php
$invoice = $apiInstance->getInvoice(
$record['tenant_id'],
$guid
);
if (!empty($invoice)) {
return $invoice;
}
``` | 1.0 | getInvoice returns an Invoices model instead of Invoice - It'd be nice if there weren't extra steps to get to the specified invoice when [calling `getInvoice`](https://github.com/XeroAPI/xero-php-oauth2/blob/58451ba80d1a09ce8120d02766508713fa1da45b/docs/accounting/Api/AccountingApi.md#getinvoice).
AFAIK this function should never return multiple invoices (that would be what `getInvoices` is for), but to get to the requested Invoice your code has to look a bit cluttered:
```php
$invoicesResponse = $apiInstance->getInvoice(
$record['tenant_id'],
$guid
);
if (!empty($invoicesResponse)) {
$invoices = $invoicesResponse->getInvoices();
if (count($invoices) > 0) {
return $invoices[0];
}
}
```
Whereas it could be:
```php
$invoice = $apiInstance->getInvoice(
$record['tenant_id'],
$guid
);
if (!empty($invoice)) {
return $invoice;
}
``` | priority | getinvoice returns an invoices model instead of invoice it d be nice if there weren t extra steps to get to the specified invoice when afaik this function should never return multiple invoices that would be what getinvoices is for but to get to the requested invoice your code has to look a bit cluttered php invoicesresponse apiinstance getinvoice record guid if empty invoicesresponse invoices invoicesresponse getinvoices if count invoices return invoices whereas it could be php invoice apiinstance getinvoice record guid if empty invoice return invoice | 1 |
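The boilerplate this record complains about — unwrap a collection response, check it is non-empty, take the first element — is a generic pattern, and the interim workaround fits in one small helper. A Python sketch of the same idea (the response class and its `get_invoices()` accessor are stand-ins mirroring the PHP SDK shape, not a real Xero API):

```python
def first_or_none(items):
    """Return the first element of a sequence, or None when empty.

    Wraps the 'collection response that always holds at most one
    record' pattern so call sites stay flat.
    """
    return items[0] if items else None


class FakeInvoicesResponse:
    """Stand-in for an SDK response object wrapping a list of invoices."""

    def __init__(self, invoices):
        self._invoices = invoices

    def get_invoices(self):
        return self._invoices


def get_single_invoice(response):
    """Unwrap a by-ID lookup that the SDK returns as a collection."""
    if response is None:
        return None
    return first_or_none(response.get_invoices())
```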
3,464 | 2,538,461,998 | IssuesEvent | 2015-01-27 07:03:23 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | NetBeans IDE displays Warnings | 2–5 stars IDE-NetBeans imported Priority-Low Type-Other | _From [bruno...@gmail.com](https://code.google.com/u/105016684496602932564/) on July 22, 2010 08:38:41_
I have successfully installed GAP within NetBeans, following the instructions in the wiki. Everything goes ok and all tests are passed, but for every subproject, NetBeans displays a warning saying:
"Some dependency artifacts are not in the local repository"
"The project loading failed or was not complete"
Any ideas how to get rid of these warnings?
_Original issue: http://code.google.com/p/gapt/issues/detail?id=76_ | 1.0 | NetBeans IDE displays Warnings - _From [bruno...@gmail.com](https://code.google.com/u/105016684496602932564/) on July 22, 2010 08:38:41_
I have successfully installed GAP within NetBeans, following the instructions in the wiki. Everything goes ok and all tests are passed, but for every subproject, NetBeans displays a warning saying:
"Some dependency artifacts are not in the local repository"
"The project loading failed or was not complete"
Any ideas how to get rid of these warnings?
_Original issue: http://code.google.com/p/gapt/issues/detail?id=76_ | priority | netbeans ide displays warnings from on july i have successfully installed gap within netbeans following the instructions in the wiki everything goes ok and all tests are passed but for every subproject netbeans displays a warning saying some dependency artifacts are not in the local repository the project loading failed or was not complete any ideas how to get rid of these warnings original issue | 1 |
337,264 | 10,212,860,267 | IssuesEvent | 2019-08-14 20:32:54 | space-wizards/RobustToolbox | https://api.github.com/repos/space-wizards/RobustToolbox | closed | [ServerConsole] restartserver doesn't restart "enough" | Priority: 3-low Project: Server Type: Bug Type: Feature | If the server is memory leaking, i'd expect to be able to run this to cleanup all the things.
Doesn't seem to do that.
| 1.0 | [ServerConsole] restartserver doesn't restart "enough" - If the server is memory leaking, i'd expect to be able to run this to cleanup all the things.
Doesn't seem to do that.
| priority | restartserver doesn t restart enough if the server is memory leaking i d expect to be able to run this to cleanup all the things doesn t seem to do that | 1 |
387,518 | 11,463,038,709 | IssuesEvent | 2020-02-07 15:16:04 | pysal/spaghetti | https://api.github.com/repos/pysal/spaghetti | closed | The Transportation Problem notebook | binders enhancement notebooks priority-low | create a notebook that demonstrates The Transportation Problem | 1.0 | The Transportation Problem notebook - create a notebook that demonstrates The Transportation Problem | priority | the transportation problem notebook create a notebook that demonstrates the transportation problem | 1 |
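A notebook like the one requested above usually starts from an initial feasible solution to the transportation problem; the northwest-corner rule is the classic first step before any cost optimization. A minimal, dependency-free sketch (the example supplies and demands in the test are made up for illustration):

```python
def northwest_corner(supply, demand):
    """Initial feasible allocation for a balanced transportation problem.

    Walks the allocation table from the top-left corner, shipping as
    much as possible at each cell, moving right when a demand is met
    and down when a supply is exhausted.
    """
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return alloc
```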
387,014 | 11,454,647,683 | IssuesEvent | 2020-02-06 17:28:15 | Sakuten/backend | https://api.github.com/repos/Sakuten/backend | closed | Do a bit more unit testing | low priority refactoring | 
Step 1: Purpose
============
* At the moment, integration tests are the main form of testing
* Test at a finer granularity
Step 2: Overview
============
* Separate out the functionality written directly in the endpoints
* For each piece, **test without going through the endpoint**
* Of course, don't delete the existing test code
* After Sakuten ends!
 | 1.0 | Do a bit more unit testing - 
Step 1: Purpose
============
* At the moment, integration tests are the main form of testing
* Test at a finer granularity
Step 2: Overview
============
* Separate out the functionality written directly in the endpoints
* For each piece, **test without going through the endpoint**
* Of course, don't delete the existing test code
* After Sakuten ends!
 | priority | do a bit more unit testing step purpose at the moment integration tests are the main form of testing test at a finer granularity step overview separate out the functionality written directly in the endpoints for each piece test without going through the endpoint of course don t delete the existing test code after sakuten ends | 1 |
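The refactor sketched in this ticket — pull the logic out of the endpoint so it can be unit-tested without going through HTTP — looks roughly like this in a Flask-style app. All names here (`pick_winners`, the sampling hook, the commented route) are hypothetical, not Sakuten's actual code:

```python
def pick_winners(entries, seats, sample):
    """Pure lottery logic: choose `seats` winners from `entries`.

    `sample(population, k)` is injected (e.g. random.sample), so unit
    tests can pass a deterministic sampler and call this function
    directly instead of exercising the endpoint.
    """
    entries = list(entries)
    if seats >= len(entries):
        return entries
    return sample(entries, seats)


# The endpoint then becomes a thin wrapper (sketch only):
# @app.route("/lottery/<lid>/draw")
# def draw(lid):
#     winners = pick_winners(load_entries(lid), seats=5, sample=random.sample)
#     return jsonify(winners=winners)
```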
254,905 | 8,101,273,074 | IssuesEvent | 2018-08-12 11:45:30 | brandon1024/find | https://api.github.com/repos/brandon1024/find | closed | Migrate to ES6 | low priority refactor | ## Issue Description
Much of our code uses the older ES5 syntax. Because ES6 has been supported by both Firefox and Chrome for quite some time, it is time to migrate to the newer syntax. This will likely have no effect on performance, but will improve readability and therefore maintainability as well.
## Tasks
- [ ] Migrate code syntax to ES6
- [ ] Update documentation to display supported browser versions | 1.0 | Migrate to ES6 - ## Issue Description
Much of our code uses the older ES5 syntax. Because ES6 has been supported by both Firefox and Chrome for quite some time, it is time to migrate to the newer syntax. This will likely have no effect on performance, but will improve readability and therefore maintainability as well.
## Tasks
- [ ] Migrate code syntax to ES6
- [ ] Update documentation to display supported browser versions | priority | migrate to issue description much of our code uses the older syntax because has been supported by both firefox and chrome for quite some time it is time to migrate to the newer syntax this will likely have no effect on performance but will improve readability and therefore maintainability as well tasks migrate code syntax to update documentation to display supported browser versions | 1 |
389,622 | 11,504,615,616 | IssuesEvent | 2020-02-12 23:48:31 | whole-tale/gwvolman | https://api.github.com/repos/whole-tale/gwvolman | closed | Add Publishing Unit Tests | priority/low | The publishing code has unit tests from girder-wholetale. It would be great to get these in here. | 1.0 | Add Publishing Unit Tests - The publishing code has unit tests from girder-wholetale. It would be great to get these in here. | priority | add publishing unit tests the publishing code has unit tests from girder wholetale it would be great to get these in here | 1 |
827,386 | 31,769,330,914 | IssuesEvent | 2023-09-12 10:44:18 | clEsperanto/CLIc_prototype | https://api.github.com/repos/clEsperanto/CLIc_prototype | opened | CUDA slow execution time | enhancement Backend low priority | Initial benchmark shows that the CUDA backend implementation take a significant time to process data.
possible slowdown culprit are
- __kernel compilation__
- kernel sources are first compiled by `nvrtc` into a `.ptx` to then be loaded into the CUDA kernel structure.
- this takes approx ~100ms per source code compiled
- e.g. 3d gaussian blur is impacted by a ~300ms delay (100ms per dimensions)
- __kernel translation__
- kernel source are in OCL, hence a code conversion is done at runtime, using pre-compiled regexp
- after verification, this process takes ~0.4 ms (and drop to ~0.2ms if multiple iterations)
- e.g. 3d gaussian blur is impacted by ~0.8ms (~0.4+0.2+0.2 ms)
more tests are needed but currently not a priority | 1.0 | CUDA slow execution time - Initial benchmark shows that the CUDA backend implementation take a significant time to process data.
possible slowdown culprit are
- __kernel compilation__
- kernel sources are first compiled by `nvrtc` into a `.ptx` to then be loaded into the CUDA kernel structure.
- this takes approx ~100ms per source code compiled
- e.g. 3d gaussian blur is impacted by a ~300ms delay (100ms per dimensions)
- __kernel translation__
- kernel source are in OCL, hence a code conversion is done at runtime, using pre-compiled regexp
- after verification, this process takes ~0.4 ms (and drop to ~0.2ms if multiple iterations)
- e.g. 3d gaussian blur is impacted by ~0.8ms (~0.4+0.2+0.2 ms)
more tests are needed but currently not a priority | priority | cuda slow execution time initial benchmark shows that the cuda backend implementation take a significant time to process data possible slowdown culprit are kernel compilation kernel source are first compiled by nvrtc into a ptx to then be loaded into the cuda kernel structure this takes approx per source code compiled e g gaussian blur is impacted by a delay per dimensions kernel translation kernel source are in ocl hence a code conversion is done at runtime using pre compiled regexp after verification this process takes ms and drop to if multiple iterations e g gaussian blur is impacted by ms more tests are needed but currently not a priority | 1 |
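The ~100 ms-per-kernel compilation cost described in the record above is usually paid once and amortized through a compile cache keyed by the kernel source (plus build options). A language-neutral sketch of that caching layer in Python — the `compile_fn` stands in for the real nvrtc/CUDA call, which is not shown here:

```python
class KernelCache:
    """Memoize expensive kernel compilations by source text.

    The first request for a given source pays the full compile cost;
    later requests reuse the stored binary, which is how backends hide
    the per-kernel runtime-compilation latency after warm-up.
    """

    def __init__(self, compile_fn):
        self._compile = compile_fn
        self._store = {}
        self.compile_calls = 0  # exposed for observability/testing

    def get(self, source, options=()):
        key = (source, tuple(options))
        if key not in self._store:
            self.compile_calls += 1
            self._store[key] = self._compile(source, options)
        return self._store[key]
```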
228,822 | 7,568,150,499 | IssuesEvent | 2018-04-22 17:13:59 | smit-happens/YCP_EVOS | https://api.github.com/repos/smit-happens/YCP_EVOS | closed | Close log files off in shutdown | priority-low size-small | <!--
Issue template
To Use this Template:
* Fill out what you can
* Delete what you do not fill out
-->
## End goal
The log files and SD card connections are closed off before the car exits shutdown | 1.0 | Close log files off in shutdown - <!--
Issue template
To Use this Template:
* Fill out what you can
* Delete what you do not fill out
-->
## End goal
The log files and SD card connections are closed off before the car exits shutdown | priority | close log files off in shutdown issue template to use this template fill out what you can delete what you do not fill out end goal the log files and sd card connections are closed off before the car exits shutdown | 1 |
201,200 | 7,025,469,517 | IssuesEvent | 2017-12-23 11:04:00 | mrmlnc/fast-glob | https://api.github.com/repos/mrmlnc/fast-glob | closed | Callback on every file found | Motivation: Low Priority: Low Type: Need More Details | It'd be nice to have the ability to run a callback on every file, something like this:
```javascript
fastGlob('sources/*', filename => {
// do some asynchronous stuff
}).then(filenames => {
console.log('done')
})
``` | 1.0 | Callback on every file found - It'd be nice to have the ability to run a callback on every file, something like this:
```javascript
fastGlob('sources/*', filename => {
// do some asynchronous stuff
}).then(filenames => {
console.log('done')
})
``` | priority | callback on every file found it d be nice to have the ability to run a callback on every file something like this javascript fastglob sources filename do some asynchronous stuff then filenames console log done | 1 |
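The API sketched in the record above — run a callback as each match is found, then still resolve with the full list — maps onto a simple callback-per-match shape. A Python analogue of the requested behaviour (fast-glob itself is JavaScript; this only illustrates the pattern over an in-memory list of names):

```python
from fnmatch import fnmatch


def glob_each(names, pattern, callback):
    """Callback-per-match globbing over an iterable of names.

    Mirrors the proposed fastGlob(pattern, cb).then(all) API: per-file
    work happens as matches stream in, and the caller still receives
    the complete match list at the end.
    """
    matches = []
    for name in names:
        if fnmatch(name, pattern):
            callback(name)
            matches.append(name)
    return matches
```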
287,201 | 8,805,421,984 | IssuesEvent | 2018-12-26 19:32:20 | tilezen/vector-datasource | https://api.github.com/repos/tilezen/vector-datasource | closed | Hide early wood, platform to zoom 17 | in review priority low | ```
hide-until-z17-any:
filter:
$zoom: { max: 17 }
kind: [bank, bus_stop, car_sharing, wood, platform]
draw:
mapzen_icon_library:
visible: false
``` | 1.0 | Hide early wood, platform to zoom 17 - ```
hide-until-z17-any:
filter:
$zoom: { max: 17 }
kind: [bank, bus_stop, car_sharing, wood, platform]
draw:
mapzen_icon_library:
visible: false
``` | priority | hide early wood platform to zoom hide until any filter zoom max kind draw mapzen icon library visible false | 1 |
514,419 | 14,938,893,144 | IssuesEvent | 2021-01-25 16:17:54 | ooni/probe | https://api.github.com/repos/ooni/probe | reopened | Raspberry Pi armv7 support | effort/XL enhancement priority/low | I want to install ooniprobe on docker container in a Raspberry Pi. What do you recommend? do you have some resource?
Thanks, Jacobo | 1.0 | Raspberry Pi armv7 support - I want to install ooniprobe on docker container in a Raspberry Pi. What do you recommend? do you have some resource?
Thanks, Jacobo | priority | raspberry pi support i want to install ooniprobe on docker container in a raspberry pi what do you recommend do you have some resource thanks jacobo | 1 |
537,638 | 15,732,315,156 | IssuesEvent | 2021-03-29 18:08:40 | Malikil/PYOP-Checker | https://api.github.com/repos/Malikil/PYOP-Checker | opened | Display more pending maps when mods are specified | enhancement low priority | Right now the bot will only display maps of a certain mod up until the size limit for one field in an embed. That's fine when multiple mods are trying to be displayed and we don't want to hit the absolute cap. But when a single mod is specified and we probably won't be hitting the cap through other mods then extra fields can be added for the selected mod so more maps can be checked at once. | 1.0 | Display more pending maps when mods are specified - Right now the bot will only display maps of a certain mod up until the size limit for one field in an embed. That's fine when multiple mods are trying to be displayed and we don't want to hit the absolute cap. But when a single mod is specified and we probably won't be hitting the cap through other mods then extra fields can be added for the selected mod so more maps can be checked at once. | priority | display more pending maps when mods are specified right now the bot will only display maps of a certain mod up until the size limit for one field in an embed that s fine when multiple mods are trying to be displayed and we don t want to hit the absolute cap but when a single mod is specified and we probably won t be hitting the cap through other mods then extra fields can be added for the selected mod so more maps can be checked at once | 1 |
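Discord caps each embed field's value at 1024 characters, which is the "size limit for one field" the record above works around; the described fix is to spill the same mod's maps across extra fields. A sketch of the chunking step (generic Python, not the bot's actual code; assumes individual entries are shorter than the limit):

```python
DISCORD_FIELD_LIMIT = 1024  # documented per-field value limit for embeds


def chunk_field_lines(lines, limit=DISCORD_FIELD_LIMIT):
    """Pack one-line entries into as few embed-field bodies as possible.

    Each returned string stays within `limit` characters, so a single
    mod's map list can span several fields instead of being cut off.
    """
    fields, current = [], ""
    for line in lines:
        candidate = current + ("\n" if current else "") + line
        if len(candidate) > limit and current:
            fields.append(current)
            current = line
        else:
            current = candidate
    if current:
        fields.append(current)
    return fields
```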
134,234 | 5,222,406,521 | IssuesEvent | 2017-01-27 08:06:33 | Sonarr/Sonarr | https://api.github.com/repos/Sonarr/Sonarr | closed | Change style of Disk Space | enhancement priority:low ui-only v3 | Any chance we can get an updated view on the Disk Space that shows a bar graph comparing the used to free space (see example from windows)


| 1.0 | Change style of Disk Space - Any chance we can get an updated view on the Disk Space that shows a bar graph comparing the used to free space (see example from windows)


| priority | change style of disk space any chance we can get an updated view on the disk space that shows a bar graph comparing the used to free space see example from windows | 1 |
600,757 | 18,355,385,045 | IssuesEvent | 2021-10-08 17:24:18 | godaddy-wordpress/coblocks | https://api.github.com/repos/godaddy-wordpress/coblocks | closed | [A11y] Improve accessibility for the Offset Gallery block | [Type] Enhancement [Priority] Low [Type] a11y | **Accessibility problem:**
Shows nothing
Requested in #1927 | 1.0 | [A11y] Improve accessibility for the Offset Gallery block - **Accessibility problem:**
Shows nothing
Requested in #1927 | priority | improve accessibility for the offset gallery block accessibility problem shows nothing requested in | 1 |
443,620 | 12,796,982,770 | IssuesEvent | 2020-07-02 11:28:26 | lbowes/falcon-9-simulation | https://api.github.com/repos/lbowes/falcon-9-simulation | opened | docs/update-readme | Priority: Low Type: Enhancement enhancement | Update the project README
- [ ] better summary
- [ ] include some gifs/webcast screenshots
- [ ] list all 3rd party libraries used so far
- [ ] install instructions
- [ ] usage instructions | 1.0 | docs/update-readme - Update the project README
- [ ] better summary
- [ ] include some gifs/webcast screenshots
- [ ] list all 3rd party libraries used so far
- [ ] install instructions
- [ ] usage instructions | priority | docs update readme update the project readme better summary include some gifs webcast screenshots list all party libraries used so far install instructions usage instructions | 1 |
612,742 | 19,030,161,762 | IssuesEvent | 2021-11-24 09:50:13 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID: 240667] Improper use of negative value in samples/subsys/usb/cdc_acm_composite/src/main.c | bug priority: low Coverity |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/c0fcd35531611bbe35376c62a9e50744d6904940/samples/subsys/usb/cdc_acm_composite/src/main.c
Category: Integer handling issues
Function: `interrupt_handler`
Component: Samples
CID: [240667](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=240667)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/c0fcd35531611bbe35376c62a9e50744d6904940/samples/subsys/usb/cdc_acm_composite/src/main.c
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.coverity.com/doc/en/cov_checker_ref.html#static_checker_NEGATIVE_RETURNS). ([CWE-394](http://cwe.mitre.org/data/definitions/394.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID: 240667] Improper use of negative value in samples/subsys/usb/cdc_acm_composite/src/main.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/c0fcd35531611bbe35376c62a9e50744d6904940/samples/subsys/usb/cdc_acm_composite/src/main.c
Category: Integer handling issues
Function: `interrupt_handler`
Component: Samples
CID: [240667](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=240667)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/c0fcd35531611bbe35376c62a9e50744d6904940/samples/subsys/usb/cdc_acm_composite/src/main.c
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.coverity.com/doc/en/cov_checker_ref.html#static_checker_NEGATIVE_RETURNS). ([CWE-394](http://cwe.mitre.org/data/definitions/394.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | improper use of negative value in samples subsys usb cdc acm composite src main c static code scan issues found in file category integer handling issues function interrupt handler component samples cid details please fix or provide comments in coverity using the link for more information about the violation check the note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
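CWE-394 / Coverity's NEGATIVE_RETURNS, flagged in the record above, boils down to using a possibly-negative return value (for example a failed read's error code) as a length or index. The defensive shape of the fix, sketched here in Python with a stand-in `read_fn` — the flagged code itself is C and not reproduced in the record:

```python
def copy_received(read_fn, buffer):
    """Guard a read-style call whose return value can be negative.

    `read_fn(buffer)` returns the number of bytes read, or a negative
    error code; the guard keeps that error code from ever being used
    as a slice length.
    """
    n = read_fn(buffer)
    if n < 0:
        raise IOError("read failed with error code %d" % n)
    return buffer[:n]
```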
97,125 | 3,985,334,925 | IssuesEvent | 2016-05-07 20:15:43 | okTurtles/group-income-simple | https://api.github.com/repos/okTurtles/group-income-simple | closed | Try out webpack? | frontend low priority research tooling | This issue is somewhat related to #37.
- [ ] Use [grunt-webpack](https://github.com/webpack/grunt-webpack)
- [ ] Get the component hot-reload thing working
- [ ] Ensure it's simple to switch back to browserify
- [ ] Make a note of the browserify and webpack related dev dependencies in a wiki, then replace the browserify deps with webpack equivalents
> Webpack dependencies: `npm i -D grunt-webpack babel-loader css-loader ejs-html-loader vue-html-loader vue-style-loader vue-loader webpack webpack-dev-server`
- [ ] Make it clear how "vendor" related JS (i.e. jquery, modernizer, etc.) gets added and used by the project | 1.0 | Try out webpack? - This issue is somewhat related to #37.
- [ ] Use [grunt-webpack](https://github.com/webpack/grunt-webpack)
- [ ] Get the component hot-reload thing working
- [ ] Ensure it's simple to switch back to browserify
- [ ] Make a note of the browserify and webpack related dev dependencies in a wiki, then replace the browserify deps with webpack equivalents
> Webpack dependencies: `npm i -D grunt-webpack babel-loader css-loader ejs-html-loader vue-html-loader vue-style-loader vue-loader webpack webpack-dev-server`
- [ ] Make it clear how "vendor" related JS (i.e. jquery, modernizer, etc.) gets added and used by the project | priority | try out webpack this issue is somewhat related to use get the component hot reload thing working ensure it s simple to switch back to browserify make a note of the browserify and webpack related dev dependencies in a wiki then replace the browserify deps with webpack equivalents webpack dependencies npm i d grunt webpack babel loader css loader ejs html loader vue html loader vue style loader vue loader webpack webpack dev server make it clear how vendor related js i e jquery modernizer etc gets added and used by the project | 1 |
637,024 | 20,617,997,803 | IssuesEvent | 2022-03-07 14:56:44 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Adjust desktop app window size on Windows | bug good first issue ux priority/low user feedback platform/windows ooni/probe-desktop | This happens only on Windows. It hides away the button to show logs when a test is running.

| 1.0 | Adjust desktop app window size on Windows - This happens only on Windows. It hides away the button to show logs when a test is running.

| priority | adjust desktop app window size on windows this happens only on windows it hides away the button to show logs when a test is running | 1 |
377,065 | 11,162,866,286 | IssuesEvent | 2019-12-26 19:36:54 | InfiniteFlightAirportEditing/Airports | https://api.github.com/repos/InfiniteFlightAirportEditing/Airports | reopened | KRDD-Redding Municipal Airport-CALIFORNIA-USA | Being Redone Low Priority | # Airport Name
Redding Municipal
# Country?
US of A
# Improvements that need to be made?
Redo
# Are you working on this airport?
Yes
# Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low)
Low
| 1.0 | KRDD-Redding Municipal Airport-CALIFORNIA-USA - # Airport Name
Redding Municipal
# Country?
US of A
# Improvements that need to be made?
Redo
# Are you working on this airport?
Yes
# Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low)
Low
| priority | krdd redding municipal airport california usa airport name redding municipal country us of a improvements that need to be made redo are you working on this airport yes airport priority if event runway world us capital low low | 1 |
778,794 | 27,329,715,061 | IssuesEvent | 2023-02-25 13:16:40 | Updated-NoCheatPlus/NoCheatPlus | https://api.github.com/repos/Updated-NoCheatPlus/NoCheatPlus | closed | How to allow jesus (waterwalk) on a horse? | (type) bug (field) vehicle (priority) low priority (resolution) postponed | ### Describe the issue
Well, I can have jesus (waterwalk) bypassed with a permission, but it doesn't work on animals.
### How to reproduce the issue
Bypass the waterwalk and ride an animal
### Extra links: video and/or debug log
https://youtu.be/6YFwtde2N8E
### Any possible config options changed or plugins that may cause interference?
Unsure | 2.0 | How to allow jesus (waterwalk) on a horse? - ### Describe the issue
Well, I can have jesus (waterwalk) bypassed with a permission, but it doesn't work on animals.
### How to reproduce the issue
Bypass the waterwalk and ride an animal
### Extra links: video and/or debug log
https://youtu.be/6YFwtde2N8E
### Any possible config options changed or plugins that may cause interference?
Unsure | priority | how to allow jesus waterwalk on a horse describe the issue well i can have jesus waterwalk bypassed with a permission but it doesn t work on animals how to reproduce the issue bypass the waterwalk and ride an animal extra links video and or debug log any possible config options changed or plugins that may cause interference unsure | 1 |
138,522 | 5,343,115,575 | IssuesEvent | 2017-02-17 10:20:50 | bulletind/khabar | https://api.github.com/repos/bulletind/khabar | closed | [khabar] Improve error logging to show parameter values | priority:low | Example:
401 4.1.3 Bad recipient address syntax
So we know someone sends a wrong email address, but to solve this it helps to know which value is actually being sent.
```
Traceback
goroutine 2888831 [running]:
gopkg.in/simversity/gotracer%2ev1.Tracer.Notify(0x0, 0xc20800b880, 0x14, 0xc208100140, 0x3, 0xc20800b8c0, 0x17, 0xc20800b900, 0x16, 0xc20800b940, ...)
/home/travis/gopath/src/gopkg.in/simversity/gotracer.v1/error.go:29 +0xcd
log.Panic(0xc2080e5868, 0x1, 0x1)
/home/travis/.gimme/versions/go/src/log/log.go:320 +0xc4
gopkg.in/bulletind/khabar.v1/utils.(*MailConn).SendEmail(0xc2080e5b28, 0xc2083dc840, 0x1a, 0xc2083efa70, 0x1, 0x1, 0xc2081463c0, 0x1a, 0xc208152000, 0x1807)
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/utils/smtp.go:51 +0x386
gopkg.in/bulletind/khabar.v1/core.emailHandler(0xc2080c5130, 0xc208152000, 0x1807, 0xc208406f30)
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/core/emailer.go:57 +0xd91
gopkg.in/bulletind/khabar.v1/core.sendToChannel(0xc2080c5130, 0xc208152000, 0x1807, 0x8d5e10, 0x5, 0xc208406f30)
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/core/sender.go:35 +0x40a
gopkg.in/bulletind/khabar.v1/core.send(0x8d5f70, 0x5, 0x8d5e10, 0x5, 0xc2080c5130)
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/core/sender.go:117 +0xb9f
gopkg.in/bulletind/khabar.v1/core.SendNotification.func1(0x8d5f70, 0x5, 0x8d5e10, 0x5, 0xc2080c5130, 0xc20841a260)
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/core/sender.go:141 +0x77
created by gopkg.in/bulletind/khabar.v1/core.SendNotification
/home/travis/gopath/src/gopkg.in/bulletind/khabar.v1/core/sender.go:142 +0x343
```
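The change this report asks for, carrying the offending parameter value in the log line rather than only the server's reply, is a general logging pattern. khabar itself is written in Go; the sketch below illustrates the idea in Python for brevity, and the name `format_send_error` is hypothetical, not part of khabar's API.

```python
def format_send_error(recipient: str, smtp_reply: str) -> str:
    """Build a log message that carries the offending value, not just the
    server's reply, so bad input can be traced back to its source."""
    # repr() keeps stray whitespace or control characters visible.
    return f"sending failed for recipient {recipient!r}: {smtp_reply}"

# Before (what the issue complains about): only the SMTP reply is logged.
bare_reply = "401 4.1.3 Bad recipient address syntax"

# After: the same reply, now with the value that triggered it.
print(format_send_error("bob@@example.com", bare_reply))
```

With the value in the message, an operator can see which address was malformed instead of only knowing that some address was.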
78,462 | 3,510,109,310 | IssuesEvent | 2016-01-09 06:48:48 | yadayada/acd_cli | https://api.github.com/repos/yadayada/acd_cli | closed | Feature Request: Optional ncurses TUI for upload | CLI enhancement low priority | It would be nice for those of us who run acd_cli from a screen session to have a bit more introspection into what's going on during runs, as the current display is just a status bar and a set of statistics for an upload action.
It's only slightly non-trivial to throw together something with a cross-platform TUI like [blessings](https://pypi.python.org/pypi/blessings/), [PyTVision](https://pypi.python.org/pypi/PyTVision), or [textland](https://pypi.python.org/pypi/textland). Hell, a pluggable UI architecture wouldn't be a *terrible* idea (though it would likely take a bit of work).
530,060 | 15,415,107,645 | IssuesEvent | 2021-03-05 01:49:49 | the-hyjal-project/bugtracker | https://api.github.com/repos/the-hyjal-project/bugtracker | closed | Miss Danna and children not walking together as a group | Database Low Priority NPC | **Describe Your Issue**
Miss Danna and her group of schoolchildren are not presently walking together as a group in Stormwind, as they should do. This is similar to my previous report about Suzanne/Janey/Lisan.
**Steps To Reproduce**
<!--- Steps to reproduce the behavior. Provide as much details as possible. -->
1. Go to Stormwind.
2. Find Miss Danna and the children listed below.
3. Note that the various schoolchildren are scattered around the city, rather than walking together with Miss Danna.
<!--- Include IDs of affected NPCs , items, quests or spells with a link to the relevant page. -->
Miss Danna:
https://classic.wowhead.com/npc=3513/miss-danna
The children:
https://classic.wowhead.com/npc=3505/pat
https://classic.wowhead.com/npc=3512/jimmy
https://classic.wowhead.com/npc=3510/twain
https://classic.wowhead.com/npc=3508/mikey
https://classic.wowhead.com/npc=3507/andi
https://classic.wowhead.com/npc=3511/steven
**Expected Behavior**
<!--- Describe how it **should** work. -->
Miss Danna and the children listed above should walk together, and have a dialogue.
584,467 | 17,455,558,943 | IssuesEvent | 2021-08-06 00:16:09 | googleapis/python-dialogflow | https://api.github.com/repos/googleapis/python-dialogflow | closed | samples.snippets.detect_intent_audio_test: test_detect_intent_audio failed | priority: p1 type: bug api: dialogflow samples flakybot: issue | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: df5733da687feb43c190fe67834a80b605802a1d
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/3ec2d974-3b38-46d6-abbe-b787908613c3), [Sponge](http://sponge2/3ec2d974-3b38-46d6-abbe-b787908613c3)
status: failed
<details><summary>Test output</summary><br><pre>capsys = <_pytest.capture.CaptureFixture object at 0x7f7b346ff4a8>
def test_detect_intent_audio(capsys):
for audio_file_path in AUDIOS:
detect_intent_audio(PROJECT_ID, SESSION_ID, audio_file_path, "en-US")
out, _ = capsys.readouterr()
> assert "Fulfillment text: What time will the meeting start?" in out
E AssertionError: assert 'Fulfillment text: What time will the meeting start?' in 'Session path: projects/python-docs-samples-tests/agent/sessions/test_4df993f5-9419-4334-af4f-73d1e93aa3ff\n\n========...=============\nQuery text: today\nDetected intent: Default Fallback Intent (confidence: 1.0)\n\nFulfillment text: \n\n'
detect_intent_audio_test.py:36: AssertionError</pre></details>
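The assertion fails intermittently because speech recognition occasionally transcribes the audio as something else ("today" in the captured output), which matches the fallback intent. One common mitigation for nondeterministic sample tests like this is to retry the call a few times before asserting. A minimal sketch, assuming nothing about the real Dialogflow client; `responses` below is a stand-in for repeated `detect_intent_audio` calls, not the actual API:

```python
import time

def retry_until(predicate, attempts=3, delay=0.0):
    """Call predicate until it returns a truthy value or attempts run out.
    Returns the last value so the caller can assert on it once."""
    result = None
    for _ in range(attempts):
        result = predicate()
        if result:
            return result
        time.sleep(delay)
    return result

# Stand-in for repeated detect-intent calls: the first two transcriptions
# miss and hit the fallback intent, the third returns the expected text.
responses = iter([
    "Fulfillment text: ",
    "Fulfillment text: ",
    "Fulfillment text: What time will the meeting start?",
])

expected = "What time will the meeting start?"
match = retry_until(lambda: (expected in next(responses)) or None, attempts=3)
print(match)  # True on the third attempt
```

Retrying only papers over the flakiness; a sturdier fix is to assert on the detected intent name rather than the exact fulfillment string.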
299,874 | 9,205,934,431 | IssuesEvent | 2019-03-08 12:09:22 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | SPIT: 'SET SEARCH_PATH TO NULL' yields an error | Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **Maciej Sieczka -** (Maciej Sieczka -)
Original Redmine Issue: 1230, https://issues.qgis.org/issues/1230
Original Assignee: nobody -
---
QGIS SVN trunk r9094, [[PostgreSQL]] 8.2.7, [[PostGIS]] 1.3.3, QT 4.4.0
Debian testing amd64.
1. add a Shapefile in SPIT
2. connect to a database
3. choose 'Public' schema
4. press OK - the import fails, with an error:
```
Problem inserting features from file:
/home/shoofi/gis/dane/WDPE/NATpntpol/Poland_NP_poly.shp
<p>Error while executing the SQL:</p><p>SET SEARCH_PATH TO NULL,'public'</p><p>The database said:ERROR: syntax error at or near "NULL"
LINE 1: SET SEARCH_PATH TO NULL,'public'
^
</p>
```
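The error comes from interpolating an unset previous search path into the statement, which produces the literal `SET SEARCH_PATH TO NULL,'public'`. The defensive fix is to skip unset entries when composing the statement. SPIT itself is C++ inside QGIS; this is an illustrative sketch in Python, not the actual QGIS code:

```python
def build_set_search_path(schemas):
    """Compose a SET search_path statement, skipping unset entries so a
    missing previous path can never be rendered as a literal NULL."""
    parts = [s for s in schemas if s]                 # drop None and ''
    if not parts:
        raise ValueError("no schema to set")
    # Escape embedded single quotes per SQL string-literal rules.
    quoted = ", ".join("'{}'".format(s.replace("'", "''")) for s in parts)
    return "SET search_path TO " + quoted

# The failing case from the report: the remembered path was unset (NULL).
print(build_set_search_path([None, "public"]))   # SET search_path TO 'public'
```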
579,976 | 17,202,606,655 | IssuesEvent | 2021-07-17 15:13:18 | ME-ICA/tedana | https://api.github.com/repos/ME-ICA/tedana | closed | Add carpet plots to HTML reports | effort: low enhancement impact: medium priority: low reports | <!--
This is a suggested issue template for tedana.
If there is other information that would be helpful to include, please do not hesitate to add it!
Before submitting, please check to make sure that the issue is not already addressed; if there is a related issue, then please cross-reference it by #.
If this is a usage question, please check out NeuroStars here:
https://neurostars.org/
and tag your topic with "tedana"
-->
<!--
Summarize the issue in 1-2 sentences, linking other issues if they are relevant
Note: simply typing # will prompt you for open issues to select from
-->
### Summary
It would be great to include carpet plots in the HTML reports. In a perfect world, those plots would be broken down by tissue type and would also have line plots attached with regressors of no interest, but those elements would require additional inputs, which we may not want to support.
<!--
If needed, add additional detail for:
1. Recreating a bug/problem
2. Any additional context necessary to understand the issue
-->
### Additional Detail
I have been working on carpet plots in `nilearn`, so we could use their functions.
This stems from #450.
<!--
If desired, add suggested next steps.
If you foresee them in a particular order or priority, please use numbering
-->
<!--
Thank you for submitting your issue!
If you do not receive a response within a calendar week, please post a comment on this issue to catch our attention.
Some issues may not be resolved right away due to the volunteer nature of the project; thank you for your patience!
-->
681,333 | 23,305,662,507 | IssuesEvent | 2022-08-08 00:16:50 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | improve warning/error handling in data-context | unification jira-migration fast-follows-2 priority: high stage: review | ## **Summary**
The file watching & need to explicitly retry on error is intentional but can appear inconsistent. Ensure we have clear internal/external documentation on the stages of the flow here.
Error and warning is also confusing and unclear - there's many different fields that can be assigned to.
Re: feedback from [~accountid:615b6c0199b4b8006a9e53b2] in bug hunt:
- - -
Basically, The experience is inconsistent. I'm getting sent three conflicting signals:
* Some changes to the config file are noticed immediately. So clearly the file in being watched.
* But sometimes I have to click a button to get changes noticed. Why is the watch not noticing solutions as well as problems?
[https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519](https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519|smart-card)
## **Acceptance Criteria**
* Should…
* Should also…
### **Resources**
Any Notion documents, Google documents, Figma Boards
### **Open Pull Requests**
Any PRs needed for review
┆Issue is synchronized with this [Jira Task](https://cypress-io.atlassian.net/browse/UNIFY-1502) by [Unito](https://www.unito.io)
┆author: Tim Griesser
┆friendlyId: UNIFY-1502
┆priority: High
┆sprint: Fast Follows 2
┆taskType: Task
41,476 | 2,869,009,175 | IssuesEvent | 2015-06-05 22:33:11 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | package:intl - support list gender message selection | Area-Pkg Pkg-Intl Priority-Low Triaged Type-Enhancement | *This issue was originally filed by @seaneagan*
_____
The intl package currently has Intl.gender to select messages based on gender. In addition when dealing with lists of people, languages have different rules to determine the gender. See:
CLDR:
http://unicode.org/reports/tr35/tr35-general.html#List_Gender
ICU:
http://icu-project.org/apiref/icu4j/com/ibm/icu/util/GenderInfo.html
Once issue #11069 is fixed, it may be nice to have a method to complement it, which can select a message based on list gender. This could either be added to Intl.gender by making the "gender" argument also accept an Iterable, or a new method such as Intl.listGender.
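The CLDR list-gender rules referenced above are small enough to sketch. Per TR35 there are three styles: NEUTRAL always resolves to "other", MIXED_NEUTRAL resolves a uniform list to its shared gender and anything mixed to "other", and MALE_TAINTS resolves anything that is not all-female to male. package:intl is Dart; the sketch below renders those rules in Python for illustration, and the `list_gender` helper name is hypothetical:

```python
def list_gender(genders, style="MALE_TAINTS"):
    """Resolve the gender of a list of people per the CLDR TR35 styles.
    This sketches the rules only; real code would load the per-locale
    style from CLDR data."""
    if not genders:
        return "other"
    if style == "NEUTRAL":
        return "other"                      # e.g. locales where lists carry no gender
    uniform = genders[0] if all(g == genders[0] for g in genders) else None
    if style == "MIXED_NEUTRAL":
        # All-male -> male, all-female -> female, anything mixed -> other.
        return uniform if uniform in ("male", "female") else "other"
    if style == "MALE_TAINTS":
        # Any male member (or mixed/unknown content) taints the list to male.
        return "female" if uniform == "female" else "male"
    raise ValueError("unknown style: " + style)
```

This shape suggests why either extending `Intl.gender` to accept an Iterable or adding a separate `Intl.listGender` would work: the resolution step is independent of message selection.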
184,077 | 6,701,027,412 | IssuesEvent | 2017-10-11 08:06:20 | OpenEMS/openems | https://api.github.com/repos/OpenEMS/openems | closed | Enable command line parameter --config for path to config file | Component: Edge Priority: Low Type: Enhancement | <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [ ] feature request
```
### Bug description or desired functionality.
<!--
What would like to see implemented?
What is the usecase?
-->
Enable command line parameter --config for path to config file
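The requested flag is a standard optional argument with a default. OpenEMS is a Java project; purely as an illustration of the intended behavior, here is the equivalent with Python's argparse, where the default path `config.json` is an arbitrary choice:

```python
import argparse

def parse_args(argv):
    """Sketch of the requested behavior: an optional --config flag with a
    default used when the flag is omitted."""
    parser = argparse.ArgumentParser(prog="openems")
    parser.add_argument("--config", default="config.json",
                        help="path to the configuration file")
    return parser.parse_args(argv)

args = parse_args(["--config", "/etc/openems/config.json"])
print(args.config)   # /etc/openems/config.json
```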
251,345 | 8,014,082,733 | IssuesEvent | 2018-07-25 04:20:08 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | mbedtls build error when CONFIG_DEBUG=y | area: Security bug priority: low | **_Reported by Andrew Boie:_**
I can only reproduce this if CONFIG_DEBUG=y. I reproduced this on qemu_x86. Test case was tests/crypto/mbedtls.
```
LD arch/x86/core/built-in.o
CC drivers/interrupt_controller/system_apic.o
LD arch/x86/built-in.o
/projects/zephyr2/ext/lib/crypto/mbedtls/library/cmac.c: In function ‘cmac_test_subkeys’:
/projects/zephyr2/ext/lib/crypto/mbedtls/library/cmac.c:768:12: error: ‘ret’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
int i, ret;
^~~
/projects/zephyr2/ext/lib/crypto/mbedtls/library/cmac.c: In function ‘cmac_test_wth_cipher’:
/projects/zephyr2/ext/lib/crypto/mbedtls/library/cmac.c:886:11: error: ‘ret’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
return( ret );
^
cc1: all warnings being treated as errors
/projects/zephyr2/scripts/Makefile.build:183: recipe for target 'ext/lib/crypto/mbedtls/library/cmac.o' failed
make[7]: *** [ext/lib/crypto/mbedtls/library/cmac.o] Error 1
CC ext/lib/crypto/mbedtls/library/ctr_drbg.o
LD arch/built-in.o
LD drivers/interrupt_controller/built-in.o
LD drivers/random/built-in.o
CC drivers/serial/uart_ns16550.o
```
(Imported from Jira ZEP-2336)
112,185 | 4,513,031,356 | IssuesEvent | 2016-09-04 01:43:07 | elbereth/DragonUnPACKer | https://api.github.com/repos/elbereth/DragonUnPACKer | closed | Crashes immediately upon startup -- no errors or anything | auto-migrated crash startup low priority sourceforge v5.6 wontfix | For several versions now (I forget which version was the last that worked but can check if needed) Dragon UnPACKer has ceased to work for me. When I try to run it I see the splash screen for a split second and then it just exits. Apparently it doesn't even count as a "first run" because Duppi says "Detected Dragon UnPACKer build (268) mismatch last run one (193). You must run Dragon UnPACKer 5 at least once after an update." (And I'm guessing build 193 was the last one that worked, though what version number specifically that is I'm not sure. It asks if I want to automatically run it and if I hit yes it tries to run with the exact same problem again. If I hit no, it shows a window for a split second with no text or anything which immediately goes away too (I don't know if it's crashing too or what.)
I'm using XP SP3. I've tried running it as an administrator just to be sure with the same results.
Reported by: nazo
823,848 | 31,047,663,676 | IssuesEvent | 2023-08-11 02:21:51 | rocky-linux/docs.rockylinux.org | https://api.github.com/repos/rocky-linux/docs.rockylinux.org | closed | Please remove/reduce the current docs/docs in mkdocs | priority: low status: wontfix type: maintenance | I would like us to seize this reorg. opportunity to change the current docs/docs (double docs) reference in mkdocs.yml and file system layout. A clearer "documentation/docs" or similar might make it easier for folks looking to understand the layout of things and help with the web tooling aspects of docs.r.o. Thanks. | 1.0 | Please remove/reduce the current docs/docs in mkdocs - I would like us to seize this reorg. opportunity to change the current docs/docs (double docs) reference in mkdocs.yml and file system layout. A clearer "documentation/docs" or similar might make it easier for folks looking to understand the layout of things and help with the web tooling aspects of docs.r.o. Thanks. | priority | please remove reduce the current docs docs in mkdocs i would like us to seize this reorg opportunity to change the current docs docs double docs reference in mkdocs yml and file system layout a clearer documentation docs or similar might make it easier for folks looking to understand the layout of things and help with the web tooling aspects of docs r o thanks | 1 |
805,953 | 29,765,957,187 | IssuesEvent | 2023-06-15 00:58:00 | GTNewHorizons/GT-New-Horizons-Modpack | https://api.github.com/repos/GTNewHorizons/GT-New-Horizons-Modpack | closed | Dump aspect command from gt++ not working properly | Type: bugMinor Priority: very low Status: Triage | ### Your GTNH Discord Username
LEO_PREMIUM#3437
### Your Pack Version
2.2.8
### Your Server
sp
### Type of Server
Single Player
### Your Expectation
write all aspects of all items.
### The Reality
the file never gets finished. (checked with several friends, same situation)
do not even have iron ingot in the file.
### Your Proposal
someone have a look at the code.
https://github.com/GTNewHorizons/GTplusplus/tree/80c790b8a7b2c61215d636b3ce14715580b2f5eb/src/main/java/gtPlusPlus/xmod/thaumcraft
should be here.
### Final Checklist
- [X] I have searched this issue tracker and there is nothing similar already. Posting on a closed issue saying the bug still exists will prompt us to investigate and reopen it once we confirm your report.
- [X] I can reproduce this problem consistently by follow the exact steps I described above, or this does not need reproducing, e.g. recipe loophole.
- [X] I have asked other people and they confirm they also have this problem by follow the exact steps I described above, or this does not need reproducing, e.g. recipe loophole. | 1.0 | Dump aspect command from gt++ not working properly - ### Your GTNH Discord Username
LEO_PREMIUM#3437
### Your Pack Version
2.2.8
### Your Server
sp
### Type of Server
Single Player
### Your Expectation
write all aspects of all items.
### The Reality
the file never gets finished. (checked with several friends, same situation)
do not even have iron ingot in the file.
### Your Proposal
someone have a look at the code.
https://github.com/GTNewHorizons/GTplusplus/tree/80c790b8a7b2c61215d636b3ce14715580b2f5eb/src/main/java/gtPlusPlus/xmod/thaumcraft
should be here.
### Final Checklist
- [X] I have searched this issue tracker and there is nothing similar already. Posting on a closed issue saying the bug still exists will prompt us to investigate and reopen it once we confirm your report.
- [X] I can reproduce this problem consistently by follow the exact steps I described above, or this does not need reproducing, e.g. recipe loophole.
- [X] I have asked other people and they confirm they also have this problem by follow the exact steps I described above, or this does not need reproducing, e.g. recipe loophole. | priority | dump aspect command from gt not working properly your gtnh discord username leo premium your pack version your server sp type of server single player your expectation write all aspects of all items the reality the file never gets finished checked with several friends same situation do not even have iron ingot in the file your proposal someone have a look at the code should be here final checklist i have searched this issue tracker and there is nothing similar already posting on a closed issue saying the bug still exists will prompt us to investigate and reopen it once we confirm your report i can reproduce this problem consistently by follow the exact steps i described above or this does not need reproducing e g recipe loophole i have asked other people and they confirm they also have this problem by follow the exact steps i described above or this does not need reproducing e g recipe loophole | 1 |
212,884 | 7,243,700,944 | IssuesEvent | 2018-02-14 12:45:36 | pmem/issues | https://api.github.com/repos/pmem/issues | opened | tests: port RUNTEST functionality from linux to windows (KEEP_GOING=y & CLEAN_FAILED=y) | Exposure: High OS: Windows Priority: 4 low Type: Feature | It would be good if on Windows there was a way to run tests without stopping on first failed test like on Linux.
```
# Normally the first failed test terminates the test run. If KEEP_GOING
# is set, continues executing all tests. If any tests fail, once all tests
# have completed reports number of failures, lists failed tests and exits
# with error status.
#
#KEEP_GOING=y
#
# This option works only if KEEP_GOING=y, then if CLEAN_FAILED is set
# all data created by test is removed on test failure.
#
#CLEAN_FAILED=y
```
| 1.0 | tests: port RUNTEST functionality from linux to windows (KEEP_GOING=y & CLEAN_FAILED=y) - It would be good if on Windows there was a way to run tests without stopping on first failed test like on Linux.
```
# Normally the first failed test terminates the test run. If KEEP_GOING
# is set, continues executing all tests. If any tests fail, once all tests
# have completed reports number of failures, lists failed tests and exits
# with error status.
#
#KEEP_GOING=y
#
# This option works only if KEEP_GOING=y, then if CLEAN_FAILED is set
# all data created by test is removed on test failure.
#
#CLEAN_FAILED=y
```
| priority | tests port runtest functionality from linux to windows keep going y clean failed y it would be good if on windows there was a way to run tests without stopping on first failed test like on linux normally the first failed test terminates the test run if keep going is set continues executing all tests if any tests fail once all tests have completed reports number of failures lists failed tests and exits with error status keep going y this option works only if keep going y then if clean failed is set all data created by test is removed on test failure clean failed y | 1 |
475,271 | 13,690,648,260 | IssuesEvent | 2020-09-30 14:37:36 | qutebrowser/qutebrowser | https://api.github.com/repos/qutebrowser/qutebrowser | opened | Improvements after brave adblock PR is merged | priority: 2 - low | Follow-up for #5317.
- [ ] Check if `_on_download_finished` of the two adblock implementations can be merged. | 1.0 | Improvements after brave adblock PR is merged - Follow-up for #5317.
- [ ] Check if `_on_download_finished` of the two adblock implementations can be merged. | priority | improvements after brave adblock pr is merged follow up for check if on download finished of the two adblock implementations can be merged | 1 |
759,608 | 26,602,868,632 | IssuesEvent | 2023-01-23 16:59:30 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | opened | Allow people without active membership to register for an event | priority: low feature | ### Problem
Currently, when creating an event, there is no option to allow people without an active membership to sign up for that event. There is an option to manually add non-members to add the event via the backend, but for alumni events (where mostly people without active membership sign up) this is a pretty tedious process, especially since there are currently two ways of signing up (via mail and via the event).
### Solution
A checkbox in the event backend that, when checked, allows website visitors to register themselves while not having an active membership. A requisite is that you do own a Thalia account, AKA have been a member in the past.
### Motivation
This would make registering for alumni events easier and more centralized. Other than that, being able to see who else is registered might work motivating for alumni to register themselves
### Alternatives
The alternative is the current situation: Mailing to the committee. In our opinion not desired.
### Additional context
See my other feature request regarding the alumni committee.
| 1.0 | Allow people without active membership to register for an event - ### Problem
Currently, when creating an event, there is no option to allow people without an active membership to sign up for that event. There is an option to manually add non-members to add the event via the backend, but for alumni events (where mostly people without active membership sign up) this is a pretty tedious process, especially since there are currently two ways of signing up (via mail and via the event).
### Solution
A checkbox in the event backend that, when checked, allows website visitors to register themselves while not having an active membership. A requisite is that you do own a Thalia account, AKA have been a member in the past.
### Motivation
This would make registering for alumni events easier and more centralized. Other than that, being able to see who else is registered might work motivating for alumni to register themselves
### Alternatives
The alternative is the current situation: Mailing to the committee. In our opinion not desired.
### Additional context
See my other feature request regarding the alumni committee.
| priority | allow people without active membership to register for an event problem currently when creating an event there is no option to allow people without an active membership to sign up for that event there is an option to manually add non members to add the event via the backend but for alumni events where mostly people without active membership sign up this is a pretty tedious process especially since there are currently two ways of signing up via mail and via the event solution a checkbox in the event backend that when checked allows website visitors to register themselves while not having an active membership a requisite is that you do own a thalia account aka have been a member in the past motivation this would make registering for alumni events easier and more centralized other than that being able to see who else is registered might work motivating for alumni to register themselves alternatives the alternative is the current situation mailing to the committee in our opinion not desired additional context see my other feature request regarding the alumni committee | 1 |
811,401 | 30,286,403,331 | IssuesEvent | 2023-07-08 18:39:05 | juno-fx/report | https://api.github.com/repos/juno-fx/report | closed | Nuke - Clearer text error when trying to add write node in unsaved script | enhancement low priority | **Describe the bug**
Make the error more clearer. It doesn't really say you should save your script before adding a write
**To Reproduce**
Steps to reproduce the behavior:
Import plate
Add write
Get error
**Expected behavior**
Just better phrasing
**Screenshots**
If applicable, add screenshots to help explain your problem.

**App (please complete the following information):**
Nuke
**Additional context**
Add any other context about the problem here.
| 1.0 | Nuke - Clearer text error when trying to add write node in unsaved script - **Describe the bug**
Make the error more clearer. It doesn't really say you should save your script before adding a write
**To Reproduce**
Steps to reproduce the behavior:
Import plate
Add write
Get error
**Expected behavior**
Just better phrasing
**Screenshots**
If applicable, add screenshots to help explain your problem.

**App (please complete the following information):**
Nuke
**Additional context**
Add any other context about the problem here.
| priority | nuke clearer text error when trying to add write node in unsaved script describe the bug make the error more clearer it doesn t really say you should save your script before adding a write to reproduce steps to reproduce the behavior import plate add write get error expected behavior just better phrasing screenshots if applicable add screenshots to help explain your problem app please complete the following information nuke additional context add any other context about the problem here | 1 |
335,937 | 10,168,837,676 | IssuesEvent | 2019-08-07 21:59:21 | chef/chef | https://api.github.com/repos/chef/chef | closed | knife node environment set output | Priority: Low Status: Sustaining Backlog | <!---
!!!!!! NOTE: CHEF CLIENT BUGS ONLY !!!!!!
This issue tracker is for the code contained within this repo -- `chef-client`, base `knife` functionality (not
plugins), `chef-apply`, `chef-solo`, `chef-client -z`, etc.
* Requests for new or alternative functionality should be made to [feedback.chef.io](https://feedback.chef.io/forums/301644-chef-product-feedback/category/110832-chef-client)
* [Chef Server issues](https://github.com/chef/chef-server/issues/new)
* [ChefDK issues](https://github.com/chef/chef-dk/issues/new)
* Cookbook Issues (see the https://github.com/chef-cookbooks repos or search [Supermarket](https://supermarket.chef.io) or GitHub/Google)
-->
## Description
knife node environment set output is wrong.
## Chef Version
Chef Workstation: 0.2.53
chef-run: 0.2.8
chef-client: 14.10.9
delivery-cli: 0.0.52 (9d07501a3b347cc687c902319d23dc32dd5fa621)
berks: 7.0.7
test-kitchen: 1.24.0
inspec: 3.6.6
## Platform Version
MacOS and others
## Replication Case
```knife node environment set apache_web acceptance```
## Client Output
```
apache_web:
chef_environment: _default
```
The output should say
```
apache_web:
chef_environment: acceptance
```
but it actually set the node to Acceptance:
```
knife node show apache_web
Node Name: apache_web
Environment: acceptance
FQDN: ip-172-31-58-128.ec2.internal
```
| 1.0 | knife node environment set output - <!---
!!!!!! NOTE: CHEF CLIENT BUGS ONLY !!!!!!
This issue tracker is for the code contained within this repo -- `chef-client`, base `knife` functionality (not
plugins), `chef-apply`, `chef-solo`, `chef-client -z`, etc.
* Requests for new or alternative functionality should be made to [feedback.chef.io](https://feedback.chef.io/forums/301644-chef-product-feedback/category/110832-chef-client)
* [Chef Server issues](https://github.com/chef/chef-server/issues/new)
* [ChefDK issues](https://github.com/chef/chef-dk/issues/new)
* Cookbook Issues (see the https://github.com/chef-cookbooks repos or search [Supermarket](https://supermarket.chef.io) or GitHub/Google)
-->
## Description
knife node environment set output is wrong.
## Chef Version
Chef Workstation: 0.2.53
chef-run: 0.2.8
chef-client: 14.10.9
delivery-cli: 0.0.52 (9d07501a3b347cc687c902319d23dc32dd5fa621)
berks: 7.0.7
test-kitchen: 1.24.0
inspec: 3.6.6
## Platform Version
MacOS and others
## Replication Case
```knife node environment set apache_web acceptance```
## Client Output
```
apache_web:
chef_environment: _default
```
The output should say
```
apache_web:
chef_environment: acceptance
```
but it actually set the node to Acceptance:
```
knife node show apache_web
Node Name: apache_web
Environment: acceptance
FQDN: ip-172-31-58-128.ec2.internal
```
| priority | knife node environment set output note chef client bugs only this issue tracker is for the code contained within this repo chef client base knife functionality not plugins chef apply chef solo chef client z etc requests for new or alternative functionality should be made to cookbook issues see the repos or search or github google description knife node environment set output is wrong chef version chef workstation chef run chef client delivery cli berks test kitchen inspec platform version macos and others replication case knife node environment set apache web acceptance client output apache web chef environment default the output should say apache web chef environment acceptance but it actually set the node to acceptance knife node show apache web node name apache web environment acceptance fqdn ip internal | 1 |
603,142 | 18,529,341,800 | IssuesEvent | 2021-10-21 02:35:08 | MPAS-Dev/MPAS-Analysis | https://api.github.com/repos/MPAS-Dev/MPAS-Analysis | closed | Check for incomplete, corrupted or changed observation files | enhancement low priority | Put a checksum for each file on the observations public space so the download script can check for modified, missing or incomplete files and re-download them as needed. | 1.0 | Check for incomplete, corrupted or changed observation files - Put a checksum for each file on the observations public space so the download script can check for modified, missing or incomplete files and re-download them as needed. | priority | check for incomplete corrupted or changed observation files put a checksum for each file on the observations public space so the download script can check for modified missing or incomplete files and re download them as needed | 1 |
675,106 | 23,079,005,616 | IssuesEvent | 2022-07-26 04:36:50 | EeeeG-Inc/OKR-web-app | https://api.github.com/repos/EeeeG-Inc/OKR-web-app | opened | PHP Insights セキュリティー対応 | priority: low | ```sh
php artisan insights --no-interaction --disable-security-check
```
- 開発速度が停滞してしまうことから、便宜上セキュリティチェックを disabled にした
- 利用者が増えた場合、セキュリティアップデートも検討する必要がある | 1.0 | PHP Insights セキュリティー対応 - ```sh
php artisan insights --no-interaction --disable-security-check
```
- 開発速度が停滞してしまうことから、便宜上セキュリティチェックを disabled にした
- 利用者が増えた場合、セキュリティアップデートも検討する必要がある | priority | php insights セキュリティー対応 sh php artisan insights no interaction disable security check 開発速度が停滞してしまうことから、便宜上セキュリティチェックを disabled にした 利用者が増えた場合、セキュリティアップデートも検討する必要がある | 1 |
248,129 | 7,927,718,702 | IssuesEvent | 2018-07-06 09:02:19 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Add beforeEdit, filter / columnMenu open events TreeList events | C: TreeList Enhancement Kendo1 Priority 1 SEV: Low | ### Enhancement
Add beforeEdit, filter / columnMenu open TreeList events.
They are already implemented for the Grid:
https://github.com/telerik/kendo/issues/6712
| 1.0 | Add beforeEdit, filter / columnMenu open events TreeList events - ### Enhancement
Add beforeEdit, filter / columnMenu open TreeList events.
They are already implemented for the Grid:
https://github.com/telerik/kendo/issues/6712
| priority | add beforeedit filter columnmenu open events treelist events enhancement add beforeedit filter columnmenu open treelist events they are already implemented for the grid | 1 |
381,831 | 11,296,069,443 | IssuesEvent | 2020-01-17 00:24:27 | pacificclimate/pdp | https://api.github.com/repos/pacificclimate/pdp | opened | DRY up ncWMS mockups for javascript testing | javascript priority:low | In order to test each individual data portal, the portal is instantiated by the javascript test suite. This includes mocking up all the calls to the backend and ncWMS needed to get the portal running with its default dataset. So the test suite includes a fake ncWMS `getCapabilities` call for the default dataset on each portal as a massive multiline xml string. These are thousands of lines long, and have to be updated when new default data is added to a portal.
The results of the `getCapabilities` query don't vary much based on dataset; a lot of it is boilerplate or highly predictable - the same set of projections, a separate listing of each colour available for each variable available, etc.
It seems like it would be straightforward to write a reusable `getCapabilities`-mocking function that accepts the unique ID of the dataset, the names of each variable in the dataset, and the list of timestamps for that dataset, and uses them to generate the massive pile of xml. (You might need spatial extent too, or a couple other things.) This would make updating the test suite for an existing portal or writing the test suite for a new portal easier.
Pretty low priority, though. | 1.0 | DRY up ncWMS mockups for javascript testing - In order to test each individual data portal, the portal is instantiated by the javascript test suite. This includes mocking up all the calls to the backend and ncWMS needed to get the portal running with its default dataset. So the test suite includes a fake ncWMS `getCapabilities` call for the default dataset on each portal as a massive multiline xml string. These are thousands of lines long, and have to be updated when new default data is added to a portal.
The results of the `getCapabilities` query don't vary much based on dataset; a lot of it is boilerplate or highly predictable - the same set of projections, a separate listing of each colour available for each variable available, etc.
It seems like it would be straightforward to write a reusable `getCapabilities`-mocking function that accepts the unique ID of the dataset, the names of each variable in the dataset, and the list of timestamps for that dataset, and uses them to generate the massive pile of xml. (You might need spatial extent too, or a couple other things.) This would make updating the test suite for an existing portal or writing the test suite for a new portal easier.
Pretty low priority, though. | priority | dry up ncwms mockups for javascript testing in order to test each individual data portal the portal is instantiated by the javascript test suite this includes mocking up all the calls to the backend and ncwms needed to get the portal running with its default dataset so the test suite includes a fake ncwms getcapabilities call for the default dataset on each portal as a massive multiline xml string these are thousands of lines long and have to be updated when new default data is added to a portal the results of the getcapabilities query don t vary much based on dataset a lot of it is boilerplate or highly predictable the same set of projections a separate listing of each colour available for each variable available etc it seems like it would be straightforward to write a reusable getcapabilities mocking function that accepts the unique id of the dataset the names of each variable in the dataset and the list of timestamps for that dataset and uses them to generate the massive pile of xml you might need spatial extent too or a couple other things this would make updating the test suite for an existing portal or writing the test suite for a new portal easier pretty low priority though | 1 |
752,202 | 26,276,552,247 | IssuesEvent | 2023-01-06 22:50:34 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] The API `/configuration/content-type/delete` should return DELETED instead of OK in the `message` response | bug priority: low validate | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
[studio] The API `/configuration/content-type/delete` should return DELETED instead of OK in the `message` response.
### Steps to reproduce
Steps:
Using a REST API tool:
1. Set up a request for the API, `POST http://localhost:8080/studio/api/2/configuration/content-type/delete`
2. Fill up with with a valid request body.
3. Send the request.
4. See the issue
### Relevant log output
```shell
{
"response": {
"code": 0,
"message": "OK",
"remedialAction": "",
"documentationUrl": ""
}
}
```
### Screenshots and/or videos
_No response_ | 1.0 | [studio] The API `/configuration/content-type/delete` should return DELETED instead of OK in the `message` response - ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
[studio] The API `/configuration/content-type/delete` should return DELETED instead of OK in the `message` response.
### Steps to reproduce
Steps:
Using a REST API tool:
1. Set up a request for the API, `POST http://localhost:8080/studio/api/2/configuration/content-type/delete`
2. Fill up with with a valid request body.
3. Send the request.
4. See the issue
### Relevant log output
```shell
{
"response": {
"code": 0,
"message": "OK",
"remedialAction": "",
"documentationUrl": ""
}
}
```
### Screenshots and/or videos
_No response_ | priority | the api configuration content type delete should return deleted instead of ok in the message response duplicates i have searched the existing issues latest version the issue is in the latest released x the issue is in the latest released x describe the issue the api configuration content type delete should return deleted instead of ok in the message response steps to reproduce steps using a rest api tool set up a request for the api post fill up with with a valid request body send the request see the issue relevant log output shell response code message ok remedialaction documentationurl screenshots and or videos no response | 1 |
127,380 | 5,029,474,721 | IssuesEvent | 2016-12-15 21:18:58 | squiggle-lang/squiggle-lang | https://api.github.com/repos/squiggle-lang/squiggle-lang | closed | Optimization: Use object literals when possible | enhancement low priority | I need to double check this, but I think perhaps object literals will be pretty-printed nicer? So use those in the case where all object keys are string literals.
| 1.0 | Optimization: Use object literals when possible - I need to double check this, but I think perhaps object literals will be pretty-printed nicer? So use those in the case where all object keys are string literals.
| priority | optimization use object literals when possible i need to double check this but i think perhaps object literals will be pretty printed nicer so use those in the case where all object keys are string literals | 1 |
71,681 | 3,367,418,074 | IssuesEvent | 2015-11-22 05:21:25 | FLEXIcontent/flexicontent-cck | https://api.github.com/repos/FLEXIcontent/flexicontent-cck | closed | About enlarging back-end templates view | enhancement Priority Low | When landing in the templates view of the back-end, the top information table and the actual templates tables are not showing full width. Which compact the info and wasting screen estate.
First fix:
/administrator/components/com_flexicontent/views/templates/tmpl/default.php
On line 73, change
```
<table class="fc-table-list" style="margin:0px; min-width: unset;">
```
for
```
<table class="fc-table-list" style="margin:0px; width:100% !important">
```
Second fix:
/administrator/components/com_flexicontent/assets/css/flexicontentbackend.css
On line 138, change
```
.flexicontent #adminForm table.adminlist {
width: auto !important;
}
```
for
```
.flexicontent #adminForm table.adminlist {
width: 100% !important;
}
```
I tested these 2 fixes on modern browser on Mac (Chrome, FF and Safari). | 1.0 | About enlarging back-end templates view - When landing in the templates view of the back-end, the top information table and the actual templates tables are not showing full width. Which compact the info and wasting screen estate.
First fix:
/administrator/components/com_flexicontent/views/templates/tmpl/default.php
On line 73, change
```
<table class="fc-table-list" style="margin:0px; min-width: unset;">
```
for
```
<table class="fc-table-list" style="margin:0px; width:100% !important">
```
Second fix:
/administrator/components/com_flexicontent/assets/css/flexicontentbackend.css
On line 138, change
```
.flexicontent #adminForm table.adminlist {
width: auto !important;
}
```
for
```
.flexicontent #adminForm table.adminlist {
width: 100% !important;
}
```
I tested these 2 fixes on modern browser on Mac (Chrome, FF and Safari). | priority | about enlarging back end templates view when landing in the templates view of the back end the top information table and the actual templates tables are not showing full width which compact the info and wasting screen estate first fix administrator components com flexicontent views templates tmpl default php on line change for second fix administrator components com flexicontent assets css flexicontentbackend css on line change flexicontent adminform table adminlist width auto important for flexicontent adminform table adminlist width important i tested these fixes on modern browser on mac chrome ff and safari | 1 |
303,234 | 9,303,991,956 | IssuesEvent | 2019-03-24 21:36:29 | TNG/ngqp | https://api.github.com/repos/TNG/ngqp | opened | Stricter TypeScript settings | Comp: Core Priority: Low Status: Accepted Type: Feature | **What's your idea?**
We should try to enable more strict TypeScript settings:
- [x] noImplicitAny
- [ ] strictNullChecks
- [ ] strictFunctionTypes
- [ ] strictPropertyInitialization
- [ ] noImplicitThis
- [ ] noImplicitReturns
- [ ] Replace all with strict
For some, like strictNullChecks, we may have to find an intermediate solution to deal with existing places. | 1.0 | Stricter TypeScript settings - **What's your idea?**
We should try to enable more strict TypeScript settings:
- [x] noImplicitAny
- [ ] strictNullChecks
- [ ] strictFunctionTypes
- [ ] strictPropertyInitialization
- [ ] noImplicitThis
- [ ] noImplicitReturns
- [ ] Replace all with strict
For some, like strictNullChecks, we may have to find an intermediate solution to deal with existing places. | priority | stricter typescript settings what s your idea we should try to enable more strict typescript settings noimplicitany strictnullchecks strictfunctiontypes strictpropertyinitialization noimplicitthis noimplicitreturns replace all with strict for some like strictnullchecks we may have to find an intermediate solution to deal with existing places | 1 |