Dataset schema (column, dtype, min/max value or string length, or distinct class count):

column         dtype          min      max
Unnamed: 0     int64          0        832k
id             float64        2.49B    32.1B
type           stringclasses  1 value
created_at     stringlengths  19       19
repo           stringlengths  5        112
repo_url       stringlengths  34       141
action         stringclasses  3 values
title          stringlengths  1        957
labels         stringlengths  4        795
body           stringlengths  1        259k
index          stringclasses  12 values
text_combine   stringlengths  96       259k
label          stringclasses  2 values
text           stringlengths  96       252k
binary_label   int64          0        1
Unnamed: 0: 57,063
id: 3,081,235,533
type: IssuesEvent
created_at: 2015-08-22 14:24:24
repo: bitfighter/bitfighter
repo_url: https://api.github.com/repos/bitfighter/bitfighter
action: closed
title: Quitting editor while uploading causes problems
labels: 019 bug imported Priority-Medium
body:
_From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on November 09, 2013 15:21:57_ Quitting the editor while uploading a level can cause a crash. In this particular instance, the thread was trying to save the level when the level no longer existed, because the editor had been closed. One possible solution might be for the editor to save itself before the thread is spawned, to avoid the save issue. _Original issue: http://code.google.com/p/bitfighter/issues/detail?id=294_
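The suggested fix ("save itself before the thread is spawned") can be sketched as follows. This is an illustrative Python sketch only; bitfighter itself is C++, and `Editor`, `upload_level`, and the level layout here are all hypothetical names, not the project's actual code:

```python
import threading

def upload_level(snapshot):
    # The worker thread only touches its own snapshot, so closing the
    # editor mid-upload cannot invalidate the data being saved/uploaded.
    return sorted(snapshot)

class Editor:
    def __init__(self, level):
        self.level = level
        self.saved = None

    def save(self):
        # Persist the level synchronously, on the caller's thread.
        self.saved = dict(self.level)

    def start_upload(self):
        # Save BEFORE spawning the worker, as the issue suggests, so the
        # thread never races with the editor being closed.
        self.save()
        t = threading.Thread(target=upload_level, args=(self.saved,))
        t.start()
        return t

editor = Editor({"name": "level1", "objects": 3})
t = editor.start_upload()
editor.level = None  # simulate the editor being closed mid-upload
t.join()
print(editor.saved)  # the snapshot survives: {'name': 'level1', 'objects': 3}
```

The key design point is that the snapshot is taken synchronously, before any concurrency starts, so the worker holds the only reference it needs.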
label: 1.0
index: priority
binary_label: 1
Unnamed: 0: 629,162
id: 20,025,006,667
type: IssuesEvent
created_at: 2022-02-01 20:16:19
repo: zephyrproject-rtos/zephyr
repo_url: https://api.github.com/repos/zephyrproject-rtos/zephyr
action: closed
title: tests: driver: clock: nrf: Several failures on nrf52dk_nrf52832
labels: bug priority: medium platform: nRF
body:
**Describe the bug**
Several tests related to the clock driver fail on nrf52dk_nrf52832. It looks like the board hangs after some initial commands and the test is terminated by a timeout. The following scenarios are failing:

- drivers.clock.drivers.clock.clock_control_nrf5
- drivers.clock.drivers.clock.clock_control_nrf5_lfclk_rc
- drivers.clock.drivers.clock.nrf5_clock_calibration
- drivers.clock.nrf_lf_clock_start_synth_stable

**To Reproduce**
Steps to reproduce the behavior:
1. Connect nrf52dk_nrf52832 (e.g. as /dev/ttyACM0)
2. Run `./scripts/twister -p nrf52dk_nrf52832 --device-testing --device-serial /dev/ttyACM0 --ninja -T tests/drivers/clock_control/ -v`
3. See error

**Expected behavior**
Tests pass.

**Impact**
Not clear.

**Logs and console output**
```
17:11 $ ./scripts/twister -p nrf52dk_nrf52832 --device-testing --device-serial /dev/ttyACM0 --ninja -T tests/drivers/clock_control/ -v
ZEPHYR_BASE unset, using "/home/maciej/zephyrproject2/zephyr"
Renaming output directory to /home/maciej/zephyrproject2/zephyr/twister-out.31
INFO - Zephyr version: v2.7.99-3217-gc71dd808345b
INFO - JOBS: 4
INFO - Using 'zephyr' toolchain.
INFO - Building initial testcase list...
INFO - 28 test scenarios (28 configurations) selected, 14 configurations discarded due to filters.
Device testing on:
| Platform         | ID   | Serial device   |
|------------------|------|-----------------|
| nrf52dk_nrf52832 |      | /dev/ttyACM0    |
INFO - Adding tasks to the queue...
INFO - Added initial list of jobs to queue
INFO - 1/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_synth_available PASSED (device 3.456s)
INFO - 2/14 nrf52dk_nrf52832 tests/drivers/clock_control/onoff/drivers.clock.clock_control_onoff PASSED (device 4.387s)
INFO - 3/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_synth_no_wait PASSED (device 5.097s)
INFO - 4/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_clock_calibration/drivers.clock.nrf5_clock_calibration FAILED Timeout (device 65.230s)
ERROR - see: /home/maciej/zephyrproject2/zephyr/twister-out/nrf52dk_nrf52832/tests/drivers/clock_control/nrf_clock_calibration/drivers.clock.nrf5_clock_calibration/handler.log
INFO - 5/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_synth_stable FAILED Timeout (device 68.428s)
ERROR - see: /home/maciej/zephyrproject2/zephyr/twister-out/nrf52dk_nrf52832/tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_synth_stable/build.log
INFO - 6/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_rc_available PASSED (device 3.346s)
INFO - 7/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_rc_no_wait PASSED (device 4.574s)
INFO - 8/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_rc_stable PASSED (device 5.238s)
INFO - 9/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_xtal_no_wait PASSED (device 6.069s)
INFO - 10/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_xtal_available PASSED (device 5.008s)
INFO - 11/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_lf_clock_start/drivers.clock.nrf_lf_clock_start_xtal_stable PASSED (device 6.449s)
INFO - 12/14 nrf52dk_nrf52832 tests/drivers/clock_control/nrf_onoff_and_bt/drivers.clock.nrf_onoff_and_bt PASSED (device 26.930s)
INFO - 13/14 nrf52dk_nrf52832 tests/drivers/clock_control/clock_control_api/drivers.clock.clock_control_nrf5_lfclk_rc FAILED Timeout (device 63.458s)
ERROR - see: /home/maciej/zephyrproject2/zephyr/twister-out/nrf52dk_nrf52832/tests/drivers/clock_control/clock_control_api/drivers.clock.clock_control_nrf5_lfclk_rc/handler.log
INFO - 14/14 nrf52dk_nrf52832 tests/drivers/clock_control/clock_control_api/drivers.clock.clock_control_nrf5 FAILED Timeout (device 63.364s)
ERROR - see: /home/maciej/zephyrproject2/zephyr/twister-out/nrf52dk_nrf52832/tests/drivers/clock_control/clock_control_api/drivers.clock.clock_control_nrf5/handler.log
INFO - 10 of 14 test configurations passed (71.43%), 4 failed, 14 skipped with 0 warnings in 364.38 seconds
INFO - In total 37 test cases were executed, 42 skipped on 1 out of total 418 platforms (0.24%)
INFO - 14 test configurations executed on platforms, 0 test configurations were only built.
```

**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Toolchain: Zephyr SDK 0.13.1
- Commit SHA or Version used: c71dd80834
label: 1.0
index: priority
binary_label: 1
Unnamed: 0: 7,184
id: 2,598,259,943
type: IssuesEvent
created_at: 2015-02-22 10:06:24
repo: HubTurbo/HubTurbo
repo_url: https://api.github.com/repos/HubTurbo/HubTurbo
action: closed
title: Don't trigger a sync if HT gains focus due to a click on the project switcher
labels: feature-projects priority.medium status.accepted type.enhancement
body:
It's annoying when I want to switch projects, but HT goes into a sync when I click on the project-switcher drop-down.
label: 1.0
index: priority
binary_label: 1
Unnamed: 0: 296,821
id: 9,126,351,784
type: IssuesEvent
created_at: 2019-02-24 20:53:50
repo: pixijs/pixi.js
repo_url: https://api.github.com/repos/pixijs/pixi.js
action: closed
title: Compressed textures integration
labels: Difficulty: Medium Domain: API Priority: Medium Renderer: WebGL Resolution: Won't Fix Type: Feature Request Version: v5.x
body:
I don't have enough experience to refactor the code properly. ".dds", ".pvr", ".pkm" formats, as in https://github.com/pixijs/pixi-compressed-textures. There is already a working example in v4: http://pixijs.github.io/examples/#/textures/dds.js We don't have to integrate a chooser, just do the following:
1. refactor CompressedImage, add it to pixi-gl-core
2. fix textureParser so it accepts those formats

UPD: looks like there's no PKM format in that code either.
label: 1.0
index: priority
binary_label: 1
Unnamed: 0: 273,437
id: 8,530,713,365
type: IssuesEvent
created_at: 2018-11-04 02:15:28
repo: robotframework/robotframework
repo_url: https://api.github.com/repos/robotframework/robotframework
action: closed
title: Write Control Character does not work on Python 3
labels: beta 2 bug priority: medium
body:
In telnetlib, control characters are no longer 'char' type but 'bytes' type. So if you try to issue the command `Write Control Character 26 # CTRL-Z` it gives the error: `TypeError: can't concat str to bytes`
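The Python 3 behavior behind this error can be reproduced in isolation; the variable names below are illustrative, not Robot Framework's or telnetlib's actual code:

```python
# In Python 3, telnet I/O buffers are bytes, while chr() produces str.
buffer = b"telnet payload"
ctrl_z = chr(26)  # Python 2 style: a one-character string

try:
    buffer + ctrl_z  # bytes + str is not allowed in Python 3
except TypeError as err:
    print(err)  # can't concat str to bytes

# A fix along the lines the issue implies: represent the control
# character as bytes before concatenating.
print(buffer + bytes([26]))  # b'telnet payload\x1a'
```

Under Python 2, `chr(26)` produced a byte string, so the concatenation worked; the type split between `str` and `bytes` is what breaks it on Python 3.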
label: 1.0
index: priority
binary_label: 1
Unnamed: 0: 781,280
id: 27,430,508,213
type: IssuesEvent
created_at: 2023-03-02 00:42:36
repo: zephyrproject-rtos/zephyr
repo_url: https://api.github.com/repos/zephyrproject-rtos/zephyr
action: closed
title: Zephyr does not define minimal C++ language standard requirement and does not track it
labels: bug priority: medium area: C++
body:
**Describe the bug**
Zephyr defines C11 as the minimal supported standard for the C language in [C language support - language standards](https://docs.zephyrproject.org/latest/develop/languages/c/index.html#language-standards). The corresponding documentation article [C++ language support](https://docs.zephyrproject.org/latest/develop/languages/cpp/index.html) does not have a "language standard" section. Roadmap #31281 also does not say a word about minimal requirements. Thus, the minimal supported C++ version is unknown. This is a serious problem since:

* C++ language standards have many more changes between releases than C language standards.
* Custom toolchains for embedded systems often concentrate on C language support (due to its popularity), leaving the C++ language at an old standard (like the dreadful C++98).

**Concrete problem encountered:**
The [SOF project](https://github.com/thesofproject/sof) uses the Cadence Xtensa toolchain "xt-xcc", based on the GNU 4.2.0 compiler, for Intel Meteorlake audio firmware. The Meteorlake board has some C++ code that uses Zephyr's `#include <zephyr/sys/util.h>`, which provides C/C++ interop helpers at the time of writing. "xt-xcc" has been obsolete for some time, and the latest language standard it supports is C++98. Everything was working fine in our CI until PR #53405, where the author added `constexpr` keyword usage in commit deab09d7a6b0c51a52ffd8bd03f17e04cf39c25c. Author @cfriedt in that PR claims (selective quote):

> > There should be a announcement for this and the code that suddenly requires new C++
>
> As mentioned above, this is all in line with Zephyr's suggested ISO C and C++ standards (C11, C++11).

**Proof of the error:**
The function template definition `__z_log2_impl` from `include/zephyr/sys/util.h` in some conditions returns value A and in other conditions value B. According to the C++ standard ISO/IEC 14882:2011 §7.1.5.3:

> The definition of a constexpr function shall satisfy the following constraints:
> - it shall not be virtual (10.3);
> - its return type shall be a literal type;
> - each of its parameter types shall be a literal type;
> - its function-body shall be = delete, = default, or a compound-statement that contains only
>   - null statements,
>   - static_assert-declarations
>   - typedef declarations and alias-declarations that do not define classes or enumerations,
>   - using-declarations,
>   - using-directives,
>   - and exactly one return statement;
> - every constructor call and implicit conversion used in initializing the return value (6.6.3, 8.5) shall be one of those allowed in a constant expression (5.19).

Also, [cppreference.com - constexpr](https://en.cppreference.com/w/cpp/language/constexpr) confirms this; quote marked "(until C++14)":

> the function body must be either deleted or defaulted or contain only the following:
> - [null statements](https://en.cppreference.com/w/cpp/language/statements#Expression_statements) (plain semicolons)
> - [static_assert](https://en.cppreference.com/w/cpp/language/static_assert) declarations
> - [typedef](https://en.cppreference.com/w/cpp/language/typedef) declarations and [alias](https://en.cppreference.com/w/cpp/language/type_alias) declarations that do not define classes or enumerations
> - [using declarations](https://en.cppreference.com/w/cpp/language/namespace#Using-declarations)
> - [using directives](https://en.cppreference.com/w/cpp/language/namespace#Using-directives)
> - if the function is not a constructor, exactly one [return](https://en.cppreference.com/w/cpp/language/return) statement

**To Reproduce**
Steps to reproduce the behavior:
1. Open my minimal reproduction project with Compiler Explorer: https://godbolt.org/z/78sWY65v4
2. Play around with the compiler version and C++ standard. Note: compiler x86-64 gcc 12.2 compiles the minimal example using C++11 without a warning! Change the compiler to x86-64 gcc 11.3 to receive an error (related to multiple return statements in a constexpr function):
   > header.h:17:1: error: body of 'constexpr' function 'constexpr int __z_log2_impl(T) [with T = int]' not a return-statement
   > 17 | }
   >    | ^
3. Change the CMake line `set_property(TARGET output.s PROPERTY CXX_STANDARD 11)` to `set_property(TARGET output.s PROPERTY CXX_STANDARD 14)` and watch how everything compiles successfully.

**Expected behavior**
1. Zephyr defines the minimal C++ language standard in the official documentation. Right now it is C++14 if we do not roll back PR #53405 (probably; the remaining C++ code needs verification too!)
2. Developers know, and check, the C++ standard of the code they write! Please note that modern compilers (GCC, Clang, MSVC) tend to provide "backward compatibility" to old C++ standards, which results in non-conforming code being compiled without an issue! If the compiler supports C++17 or C++20, it is possible that it will compile code marked -std=c++11 that should not compile with that standard, due to the compiler extensions they provide! We need to watch the conformance carefully, maybe with some static analysis tools?

**Impact**
The SOF project has to urgently upgrade its toolchain to the new Cadence Xtensa "xt-clang", based on clang, which allows using up to the C++17 standard. If we do not do this, we are unable to upgrade Zephyr revisions since our code will not build! Luckily, we had been preparing for the toolchain change in https://github.com/thesofproject/sof/pull/7027 for some time and have only 2 .cpp source files at the time of writing.

**Logs and console output**
Compiling with xt-xcc: [xt-xcc.log](https://github.com/zephyrproject-rtos/zephyr/files/10835174/xt-xcc.log)
Compiling with xt-clang -std=c++11: [xt-clang C++11.log](https://github.com/zephyrproject-rtos/zephyr/files/10835177/xt-clang.C%2B%2B11.log)

**Environment (please complete the following information):**
- OS: Windows, Linux

**Additional context**
N/A
label: 1.0
index: priority
clang std c environment please complete the following information os windows linux additional context n a
1
339,804
10,262,515,504
IssuesEvent
2019-08-22 12:30:50
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
openocd unable to flash hello_world to cc26x2r1_launchxl
bug priority: medium
**Describe the bug** west flash fails to write hello_world to the LaunchXL CC26X2R1, complaining that the debug regions are unpowered. **To Reproduce** follow the instructions here: https://docs.zephyrproject.org/latest/boards/arm/cc26x2r1_launchxl/doc/index.html Use a LAUNCHXL-CC26X2R1 **Expected behavior** west flash completes successfully, and "hello world" is printed on the serial interface **Impact** I'm unable to use this CC26x2 device (show stopper) **Screenshots or console output**-- west flash: using runner openocd It took a while to figure out that the boards ship with only some of the JTAG jumpers in place. I had to add jumpers to TDO, TDI, and SWD. Prior to doing that I was getting a similar error, but notably OpenOCD was reporting: "tap-disabled" I can get a little further with these jumpers installed, but only to the point where it complains "Error: Debug regions are unpowered, an unexpected reset might have happened". Here is the full output: Open On-Chip Debugger 0.10.0+dev-g1df07a9a4-dirty (2019-06-13-13:14) Licensed under GNU GPL v2 For bug reports, read http://openocd.org/doc/doxygen/bugs.html adapter speed: 2500 kHz srst_only separate srst_gates_jtag srst_open_drain connect_deassert_srst adapter_nsrst_delay: 100 Info : XDS110: connected Info : XDS110: firmware version = 2.3.0.14 Info : XDS110: hardware version = 0x0023 Info : XDS110: connected to target via JTAG Info : XDS110: TCK set to 2500 kHz Info : clock speed 2500 kHz Info : JTAG tap: cc26x2.jrc tap/device found: 0x3bb4102f (mfg: 0x017 (Texas Instruments), part: 0xbb41, ver: 0x3) Info : JTAG tap: cc26x2.cpu enabled Info : cc26x2.cpu: hardware has 6 breakpoints, 4 watchpoints Info : Listening on port 3333 for gdb connections TargetName Type Endian TapName State -- ------------------ ---------- ------ ------------------ ------------ 0* cc26x2.cpu cortex_m little cc26x2.cpu halted Info : JTAG tap: cc26x2.jrc tap/device found: 0x3bb4102f (mfg: 0x017 (Texas Instruments), part: 0xbb41, ver: 0x3) 
Info : JTAG tap: cc26x2.cpu enabled Error: Debug regions are unpowered, an unexpected reset might have happened Error: JTAG-DP STICKY ERROR Error: Could not find MEM-AP to control the core Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: DP initialisation failed Info : XDS110: disconnected **Environment (please complete the following information):** - OS: Linux (Ubuntu 18.04.2) - Toolchain: Zephyr SDK git rev-parse HEAD 2463ded4c84cc8e0f1de3d3966c67db0c025e113 **Additional context** I'm able to use west/ninja to flash an nrf52840_pca10056 without incident.
1.0
openocd unable to flash hello_world to cc26x2r1_launchxl - **Describe the bug** west flash fails to write hello_world to the LaunchXL CC26X2R1, complaining that the debug regions are unpowered. **To Reproduce** follow the instructions here: https://docs.zephyrproject.org/latest/boards/arm/cc26x2r1_launchxl/doc/index.html Use a LAUNCHXL-CC26X2R1 **Expected behavior** west flash completes successfully, and "hello world" is printed on the serial interface **Impact** I'm unable to use this CC26x2 device (show stopper) **Screenshots or console output**-- west flash: using runner openocd It took a while to figure out that the boards ship with only some of the JTAG jumpers in place. I had to add jumpers to TDO, TDI, and SWD. Prior to doing that I was getting a similar error, but notably OpenOCD was reporting: "tap-disabled" I can get a little further with these jumpers installed, but only to the point where it complains "Error: Debug regions are unpowered, an unexpected reset might have happened". 
Here is the full output: Open On-Chip Debugger 0.10.0+dev-g1df07a9a4-dirty (2019-06-13-13:14) Licensed under GNU GPL v2 For bug reports, read http://openocd.org/doc/doxygen/bugs.html adapter speed: 2500 kHz srst_only separate srst_gates_jtag srst_open_drain connect_deassert_srst adapter_nsrst_delay: 100 Info : XDS110: connected Info : XDS110: firmware version = 2.3.0.14 Info : XDS110: hardware version = 0x0023 Info : XDS110: connected to target via JTAG Info : XDS110: TCK set to 2500 kHz Info : clock speed 2500 kHz Info : JTAG tap: cc26x2.jrc tap/device found: 0x3bb4102f (mfg: 0x017 (Texas Instruments), part: 0xbb41, ver: 0x3) Info : JTAG tap: cc26x2.cpu enabled Info : cc26x2.cpu: hardware has 6 breakpoints, 4 watchpoints Info : Listening on port 3333 for gdb connections TargetName Type Endian TapName State -- ------------------ ---------- ------ ------------------ ------------ 0* cc26x2.cpu cortex_m little cc26x2.cpu halted Info : JTAG tap: cc26x2.jrc tap/device found: 0x3bb4102f (mfg: 0x017 (Texas Instruments), part: 0xbb41, ver: 0x3) Info : JTAG tap: cc26x2.cpu enabled Error: Debug regions are unpowered, an unexpected reset might have happened Error: JTAG-DP STICKY ERROR Error: Could not find MEM-AP to control the core Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid 
ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: Invalid ACK (0) in DAP response Error: DP initialisation failed Info : XDS110: disconnected **Environment (please complete the following information):** - OS: Linux (Ubuntu 18.04.2) - Toolchain: Zephyr SDK git rev-parse HEAD 2463ded4c84cc8e0f1de3d3966c67db0c025e113 **Additional context** I'm able to use west/ninja to flash an nrf52840_pca10056 without incident.
priority
openocd unable to flash hello world to launchxl describe the bug west flash fails to write hello world to the launchxl complaining that the debug regions are unpowered to reproduce follow the instructions here use a launchxl expected behavior west flash completes successfully and hello world is printed on the serial interface impact i m unable to use this device show stopper screenshots or console output west flash using runner openocd it took a while to figure out that the boards ship with only some of the jtag jumpers in place i had to add jumpers to tdo tdi and swd prior to doing that i was getting a similar error but notably openocd was reporting tap disabled i can get a little further with these jumpers installed but only to the point where it complains error debug regions are unpowered an unexpected reset might have happened here is the full output open on chip debugger dev dirty licensed under gnu gpl for bug reports read adapter speed khz srst only separate srst gates jtag srst open drain connect deassert srst adapter nsrst delay info connected info firmware version info hardware version info connected to target via jtag info tck set to khz info clock speed khz info jtag tap jrc tap device found mfg texas instruments part ver info jtag tap cpu enabled info cpu hardware has breakpoints watchpoints info listening on port for gdb connections targetname type endian tapname state cpu cortex m little cpu halted info jtag tap jrc tap device found mfg texas instruments part ver info jtag tap cpu enabled error debug regions are unpowered an unexpected reset might have happened error jtag dp sticky error error could not find mem ap to control the core error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid 
ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error invalid ack in dap response error dp initialisation failed info disconnected environment please complete the following information os linux ubuntu toolchain zephyr sdk git rev parse head additional context i m able to use west ninja to flash an without incident
1
281,801
8,699,803,730
IssuesEvent
2018-12-05 06:13:55
hotosm/tasking-manager
https://api.github.com/repos/hotosm/tasking-manager
closed
Task history lost on split
High Priority In Progress Medium Difficulty regression
Tiles are split 4 ways. In the old task manager the edit history and comments would propagate to the four child tiles after a split. In the new task manager this does not happen and all history and comments are lost.
1.0
Task history lost on split - Tiles are split 4 ways. In the old task manager the edit history and comments would propagate to the four child tiles after a split. In the new task manager this does not happen and all history and comments are lost.
priority
task history lost on split tiles are split ways in the old task manager the edit history and comments would propagate to the four child tiles after a split in the new task manager this does not happen and all history and comments are lost
1
162,794
6,175,949,087
IssuesEvent
2017-07-01 08:22:42
minio/minio-py
https://api.github.com/repos/minio/minio-py
closed
Add additional headers in all requests during multipart upload
priority: medium
About a month ago I made a pull request to add support for additional headers in get requests (https://github.com/minio/minio-py/pull/522). I'm using these headers in both the put and get methods, to enable SSE-C in S3 storage. Additional metadata was implemented earlier for `put_object` and `fput_object`. But, as I discovered later, this does not work with objects larger than `MIN_OBJECT_SIZE` (5MiB currently), because `minio` starts a multipart upload and transmits these headers only in the first request, which initiates the upload. At the same time, when you enable SSE-C in the initial multipart request, you need to provide the same headers for every part's request. So I've created a PR to add this in `_do_put_multipart_upload`. As an additional benefit, I've added it to `_complete_multipart_upload`, to make it more complete. The last request doesn't require these headers, but this adds the ability to transmit the [Common Request Headers](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html) that it supports. Here is the PR: https://github.com/minio/minio-py/pull/544
1.0
Add additional headers in all requests during multipart upload - About a month ago I made a pull request to add support for additional headers in get requests (https://github.com/minio/minio-py/pull/522). I'm using these headers in both the put and get methods, to enable SSE-C in S3 storage. Additional metadata was implemented earlier for `put_object` and `fput_object`. But, as I discovered later, this does not work with objects larger than `MIN_OBJECT_SIZE` (5MiB currently), because `minio` starts a multipart upload and transmits these headers only in the first request, which initiates the upload. At the same time, when you enable SSE-C in the initial multipart request, you need to provide the same headers for every part's request. So I've created a PR to add this in `_do_put_multipart_upload`. As an additional benefit, I've added it to `_complete_multipart_upload`, to make it more complete. The last request doesn't require these headers, but this adds the ability to transmit the [Common Request Headers](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html) that it supports. Here is the PR: https://github.com/minio/minio-py/pull/544
priority
add additional headers in all requests during multipart upload about a month ago i made a pull request to add support for additional headers in get requests i m using these headers in both the put and get methods to enable sse c in storage additional metadata was implemented earlier for put object and fput object but as i discovered later this does not work with objects larger than min object size currently because minio starts a multipart upload and transmits these headers only in the first request which initiates the upload at the same time when you enable sse c in the initial multipart request you need to provide the same headers for every part s request so i ve created a pr to add this in do put multipart upload as an additional benefit i ve added it to complete multipart upload to make it more complete the last request doesn t require these headers but this adds the ability to transmit the common request headers which it supports here is the pr
1
415,542
12,130,411,509
IssuesEvent
2020-04-23 01:28:51
minio/mc
https://api.github.com/repos/minio/mc
closed
Unable to copy a list of files to a remote directory
community priority: medium
## Expected behavior Minio copies all files to a new directory on minio ## Actual behavior Minio fails to copy: "Object name contains unsupported characters." ## Steps to reproduce the behavior ``` $ mkdir test $ echo test1 > test/file1 $ echo test1 > test/file2 $ echo test1 > test/file3 $ echo test1 > test/file4 $ echo test1 > test/file5 # Try the old version: $ ./20190711193128Z_1/bin/mc --version mc version RELEASE.2019-07-11T19-31-28Z $ ./20190711193128Z_1/bin/mc cp test/* bins/images/test/ test/file5: 30 B / 30 B 100.00% 461 B/s 0s $ ./20190711193128Z_1/bin/mc ls bins/images/test [2020-04-10 15:51:42 EDT] 6B file1 [2020-04-10 15:51:42 EDT] 6B file2 [2020-04-10 15:51:42 EDT] 6B file3 [2020-04-10 15:51:42 EDT] 6B file4 [2020-04-10 15:51:42 EDT] 6B file5 $ # Now trying with the latest mc release: $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc --version mc version RELEASE.2020-04-04T05-28-55Z $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc cp test/* bins/images/newtest/ mc: <ERROR> Failed to copy `test/file2`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file1`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file3`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file4`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file5`. Object name contains unsupported characters. test/file5: 30 B / 30 B 711 B/s 0s $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc ls bins/images/newtest/ mc: <ERROR> Unable to stat `bins/images/newtest/`. Object does not exist. $ ``` ## mc --version Old Version: ``` mc version RELEASE.2019-07-11T19-31-28Z ``` New Version: ``` mc version RELEASE.2020-04-04T05-28-55Z ``` ## System information MacOS X 10.13. mc installed with brew, issue experienced with docker container also.
1.0
Unable to copy a list of files to a remote directory - ## Expected behavior Minio copies all files to a new directory on minio ## Actual behavior Minio fails to copy: "Object name contains unsupported characters." ## Steps to reproduce the behavior ``` $ mkdir test $ echo test1 > test/file1 $ echo test1 > test/file2 $ echo test1 > test/file3 $ echo test1 > test/file4 $ echo test1 > test/file5 # Try the old version: $ ./20190711193128Z_1/bin/mc --version mc version RELEASE.2019-07-11T19-31-28Z $ ./20190711193128Z_1/bin/mc cp test/* bins/images/test/ test/file5: 30 B / 30 B 100.00% 461 B/s 0s $ ./20190711193128Z_1/bin/mc ls bins/images/test [2020-04-10 15:51:42 EDT] 6B file1 [2020-04-10 15:51:42 EDT] 6B file2 [2020-04-10 15:51:42 EDT] 6B file3 [2020-04-10 15:51:42 EDT] 6B file4 [2020-04-10 15:51:42 EDT] 6B file5 $ # Now trying with the latest mc release: $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc --version mc version RELEASE.2020-04-04T05-28-55Z $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc cp test/* bins/images/newtest/ mc: <ERROR> Failed to copy `test/file2`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file1`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file3`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file4`. Object name contains unsupported characters. mc: <ERROR> Failed to copy `test/file5`. Object name contains unsupported characters. test/file5: 30 B / 30 B 711 B/s 0s $ ./RELEASE.2020-04-04T05-28-55Z_1/bin/mc ls bins/images/newtest/ mc: <ERROR> Unable to stat `bins/images/newtest/`. Object does not exist. $ ``` ## mc --version Old Version: ``` mc version RELEASE.2019-07-11T19-31-28Z ``` New Version: ``` mc version RELEASE.2020-04-04T05-28-55Z ``` ## System information MacOS X 10.13. mc installed with brew, issue experienced with docker container also.
priority
unable to copy a list of files to a remote directory expected behavior minio copies all files to a new directory on minio actual behavior minio fails to copy object name contains unsupported characters steps to reproduce the behavior mkdir test echo test echo test echo test echo test echo test try the old version bin mc version mc version release bin mc cp test bins images test test b b b s bin mc ls bins images test now trying with the latest mc release release bin mc version mc version release release bin mc cp test bins images newtest mc failed to copy test object name contains unsupported characters mc failed to copy test object name contains unsupported characters mc failed to copy test object name contains unsupported characters mc failed to copy test object name contains unsupported characters mc failed to copy test object name contains unsupported characters test b b b s release bin mc ls bins images newtest mc unable to stat bins images newtest object does not exist mc version old version mc version release new version mc version release system information macos x mc installed with brew issue experienced with docker container also
1
688,537
23,586,727,806
IssuesEvent
2022-08-23 12:16:07
aleksbobic/csx
https://api.github.com/repos/aleksbobic/csx
opened
Provide selected nodes / components / clusters as advanced search nodes
enhancement priority:high Complexity:medium
It should be possible to use the selected node and component values to directly feed a search request from the "advanced search"
1.0
Provide selected nodes / components / clusters as advanced search nodes - It should be possible to use the selected node and component values to directly feed a search request from the "advanced search"
priority
provide selected nodes components clusters as advanced search nodes it should be possible to use the selected node and component values to directly feed a search request from the advanced search
1
44,080
2,899,112,220
IssuesEvent
2015-06-17 09:17:09
greenlion/PHP-SQL-Parser
https://api.github.com/repos/greenlion/PHP-SQL-Parser
closed
Parser breaks on nested subqueries ending in ))
bug imported Priority-Medium
_From [smalys...@gmail.com](https://code.google.com/u/106185853731560372304/) on February 15, 2012 02:13:50_ If I try to parse a complex query like this: SELECT * FROM contacts WHERE contacts.id IN (SELECT email_addr_bean_rel.bean_id FROM email_addr_bean_rel, email_addresses WHERE email_addresses.id = email_addr_bean_rel.email_address_id AND email_addr_bean_rel.deleted = 0 AND email_addr_bean_rel.bean_module = 'Contacts' AND email_addresses.email_address IN ("test@example.com")) then the query parser does not parse the internal subquery properly. The reason for that is that trim functions which are used on encompassing subquery do trim($sql, " ()") and thus when trimming they remove both enclosing parentheses for the subquery and the ones that belong to inner IN, which they should not be doing. I would propose to use something like this: private function trimSubquery($sq) { $sq = trim($sq); if(empty($sq)) return ''; while($sq[0] == '(' && substr($sq, -1) == ')') { $sq = substr($sq, 1, -1); $sq = trim($sq); } return $sq; } to keep trims balanced and use this functions instead of regular trim with ' ()'. _Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=25_
1.0
Parser breaks on nexted subqueries ending in )) - _From [smalys...@gmail.com](https://code.google.com/u/106185853731560372304/) on February 15, 2012 02:13:50_ If I try to parse a complex query like this: SELECT * FROM contacts WHERE contacts.id IN (SELECT email_addr_bean_rel.bean_id FROM email_addr_bean_rel, email_addresses WHERE email_addresses.id = email_addr_bean_rel.email_address_id AND email_addr_bean_rel.deleted = 0 AND email_addr_bean_rel.bean_module = 'Contacts' AND email_addresses.email_address IN ("test@example.com")) then the query parser does not parse the internal subquery properly. The reason for that is that trim functions which are used on encompassing subquery do trim($sql, " ()") and thus when trimming they remove both enclosing parentheses for the subquery and the ones that belong to inner IN, which they should not be doing. I would propose to use something like this: private function trimSubquery($sq) { $sq = trim($sq); if(empty($sq)) return ''; while($sq[0] == '(' && substr($sq, -1) == ')') { $sq = substr($sq, 1, -1); $sq = trim($sq); } return $sq; } to keep trims balanced and use this functions instead of regular trim with ' ()'. _Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=25_
priority
parser breaks on nexted subqueries ending in from on february if i try to parse a complex query like this select from contacts where contacts id in select email addr bean rel bean id from email addr bean rel email addresses where email addresses id email addr bean rel email address id and email addr bean rel deleted and email addr bean rel bean module contacts and email addresses email address in test example com then the query parser does not parse the internal subquery properly the reason for that is that trim functions which are used on encompassing subquery do trim sql and thus when trimming they remove both enclosing parentheses for the subquery and the ones that belong to inner in which they should not be doing i would propose to use something like this private function trimsubquery sq sq trim sq if empty sq return while sq substr sq sq substr sq sq trim sq return sq to keep trims balanced and use this functions instead of regular trim with original issue
1
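The balanced-trim fix proposed in the PHP-SQL-Parser record above can be sketched in Python. This is an illustrative sketch, not the library's code: the function name mirrors the proposed `trimSubquery`, and the matching-pair scan is an added safeguard (beyond the report's pairwise stripping) so that parentheses belonging to an inner `IN (...)` list are never removed.

```python
def trim_subquery(sq: str) -> str:
    """Strip whitespace and *matching* outer parentheses from a subquery.

    Unlike trim($sql, " ()"), this removes a leading '(' only together
    with its matching trailing ')', so the closing parenthesis of an
    inner IN (...) list is preserved.
    """
    sq = sq.strip()
    while sq.startswith("(") and sq.endswith(")"):
        depth = 0
        for i, ch in enumerate(sq):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0 and i != len(sq) - 1:
                    # The first '(' closes before the end, so the outer
                    # parentheses are not a matching pair -- stop here.
                    return sq
        sq = sq[1:-1].strip()
    return sq
```

With this check, `(a) AND (b)` is left untouched, while `((SELECT 1))` is unwrapped to `SELECT 1`.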
661,006
22,038,246,500
IssuesEvent
2022-05-28 23:52:43
LuanRT/google-this
https://api.github.com/repos/LuanRT/google-this
closed
Move to Google's private API (Image Search)
enhancement priority: medium
It seems like the google website uses an API to fetch image search results, the data is a bit messy but it's way faster than parsing raw html, has more metadata, returns up to 100 images, and allows more request options. ## Sample Code This could be done by sending a post request to `https://www.google.com/_/VisualFrontendUi/data/batchexecute` with the following payload (form data): ```js "f.req" => [ [ [ "HoAMBc", JSON.stringify([ null, null, [ 2,null,529,5,396, [],[9429,9520],[194,194], false,null,null,9520 ], null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,[ "query-goes-here", null,null,null, null,null,null, null,null,null, null,null,null, null,null,null, null,null,null, null,null, "lnms" ], null,null,null, null,null,null, null,null,[ null, "CAE=", "GGwgAA==" // these are protobufs ],null,true ]), null, "generic" ] ] ] "at" => `ABrGKkThP-sNRUU6e86GjhgSvs0k:${new Date().getTime()}` // random request id + timestamp ```
1.0
Move to Google's private API (Image Search) - It seems like the google website uses an API to fetch image search results, the data is a bit messy but it's way faster than parsing raw html, has more metadata, returns up to 100 images, and allows more request options. ## Sample Code This could be done by sending a post request to `https://www.google.com/_/VisualFrontendUi/data/batchexecute` with the following payload (form data): ```js "f.req" => [ [ [ "HoAMBc", JSON.stringify([ null, null, [ 2,null,529,5,396, [],[9429,9520],[194,194], false,null,null,9520 ], null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,null,null,null, null,[ "query-goes-here", null,null,null, null,null,null, null,null,null, null,null,null, null,null,null, null,null,null, null,null, "lnms" ], null,null,null, null,null,null, null,null,[ null, "CAE=", "GGwgAA==" // these are protobufs ],null,true ]), null, "generic" ] ] ] "at" => `ABrGKkThP-sNRUU6e86GjhgSvs0k:${new Date().getTime()}` // random request id + timestamp ```
priority
move to google s private api image search it seems like the google website uses an api to fetch image search results the data is a bit messy but it s way faster than parsing raw html has more metadata returns up to images and allows more request options sample code this could be done by sending a post request to with the following payload form data js f req hoambc json stringify null null null false null null null null null null null null null null null null null null null null null null null null null null null null null null null query goes here null null null null null null null null null null null null null null null null null null null null lnms null null null null null null null null null cae ggwgaa these are protobufs null true null generic at abrgkkthp new date gettime random request id timestamp
1
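The envelope construction captured in the google-this record above can be sketched in Python. A sketch only: `build_envelope` is a hypothetical helper, the inner argument array is abbreviated to the query slot (a real request needs the full null-padded array and the protobuf strings shown in the capture), and the `at` token format simply mirrors the captured value.

```python
import json
import time

# Endpoint taken from the capture above.
BATCHEXECUTE = "https://www.google.com/_/VisualFrontendUi/data/batchexecute"

def build_envelope(query: str) -> dict:
    """Build the form data for the 'HoAMBc' RPC shown in the capture.

    The inner argument list is abbreviated here; the real payload
    carries a long null-padded array plus two protobuf strings.
    """
    inner_args = [None, None, None, [query]]  # abbreviated placeholder
    rpc_envelope = [[["HoAMBc", json.dumps(inner_args), None, "generic"]]]
    return {
        "f.req": json.dumps(rpc_envelope),
        # request-id token + millisecond timestamp, as in the capture
        "at": f"ABrGKkThP-sNRUU6e86GjhgSvs0k:{int(time.time() * 1000)}",
    }
```

Note the double JSON encoding: the inner argument list is serialized to a string before being embedded in the outer `f.req` array, which is itself serialized for the form field.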
479,116
13,791,439,260
IssuesEvent
2020-10-09 12:09:02
gnosis/conditional-tokens-explorer
https://api.github.com/repos/gnosis/conditional-tokens-explorer
closed
Positions list: filter by OWL returns no results
Medium priority QA Passed bug
Open Positions list and select OWL collateral in the filter **Actual result:** filter by OWL returns no results ![search.jpg](https://images.zenhubusercontent.com/5f55ec970a2e890ec10f7778/3b48af8e-3776-4003-8a8f-5712bd17d59b) **Expected Result:** positions with OWL collateral are displayed in the grid
1.0
Positions list: filter by OWL returns no results - Open Positions list and select OWL collateral in the filter **Actual result:** filter by OWL returns no results ![search.jpg](https://images.zenhubusercontent.com/5f55ec970a2e890ec10f7778/3b48af8e-3776-4003-8a8f-5712bd17d59b) **Expected Result:** positions with OWL collateral are displayed in the grid
priority
positions list filter by owl returns no results open positions list and select owl collateral in the filter actual result filter by owl returns no results expected result positions with owl collateral are displayed in the grid
1
188,042
6,767,690,266
IssuesEvent
2017-10-26 05:13:14
RSPluto/Web-UI
https://api.github.com/repos/RSPluto/Web-UI
closed
Real-time monitoring -- area configuration: when adding a configuration, if the added base map is not an image file, the prompt message is incorrect.
bug Fixed Medium Priority
Test steps: 1. Open the area configuration page. 2. When filling in the area information, choose a non-image file format for the upload. Expected result: 2. When the added base map is a non-image file (e.g. a .doc file), the prompt message shown should clearly point out that the added file format is invalid, e.g. "Failed to add image, please check whether the file format is .png/.jpg...." Actual result: 2. When the added base map is a non-image file, the prompt shown is "Error: failed to load the image file, image compression failed, please choose another image". The format of the added file is not validated. As shown in the screenshot: ![ttttt](https://user-images.githubusercontent.com/32258990/31533985-67ecdb14-b027-11e7-83aa-ece24fe2fcd9.PNG)
1.0
Real-time monitoring -- area configuration: when adding a configuration, if the added base map is not an image file, the prompt message is incorrect. - Test steps: 1. Open the area configuration page. 2. When filling in the area information, choose a non-image file format for the upload. Expected result: 2. When the added base map is a non-image file (e.g. a .doc file), the prompt message shown should clearly point out that the added file format is invalid, e.g. "Failed to add image, please check whether the file format is .png/.jpg...." Actual result: 2. When the added base map is a non-image file, the prompt shown is "Error: failed to load the image file, image compression failed, please choose another image". The format of the added file is not validated. As shown in the screenshot: ![ttttt](https://user-images.githubusercontent.com/32258990/31533985-67ecdb14-b027-11e7-83aa-ece24fe2fcd9.PNG)
priority
real time monitoring area configuration when adding a configuration if the added base map is not an image file the prompt message is incorrect test steps open the area configuration page when filling in the area information choose a non image file format for the upload expected result when the added base map is a non image file e g a doc file the prompt message shown should clearly point out that the added file format is invalid e g failed to add image please check whether the file format is png jpg actual result when the added base map is a non image file the prompt shown is error failed to load the image file image compression failed please choose another image the format of the added file is not validated as shown in the screenshot
1
222,616
7,434,470,915
IssuesEvent
2018-03-26 11:09:08
minio/minio-py
https://api.github.com/repos/minio/minio-py
reopened
Several documentation issues
priority: medium
There are several minio-py SDK documentation issues as listed below: - [x] **`Missing`** optional arguments and/or their default values in command titles: **`..., request_headers = None`**) in **get_object**, **get_partial_object**, **fget_object** `..., copy_conditions`**`= None`**) in **copy_object** `..., content_type`**`= 'application/octet-stream', metadata = None`**) in **put_object**, **fput_object** Example: `fput_object(bucket_name, object_name, file_path, content_type)` => `fput_object(bucket_name, object_name, file_path, content_type`**` = 'application/octet-stream', metadata = None`**) - [x] Explanation for **fput_object** command argument `file_path` is wrong: | `file_path` | string | Path on the local filesystem **`to which the object data will be written.`**| => | `file_path` | string | Path on the local filesystem **`from which object data will be read.`**| - [x] Optional argument, `objectPrefix`, is listed as mandatory argument for **list_objects** and **list_objects_v2** commands. Fix is needed in the title and in the table as shown below: Title: `list_objects(bucket_name, prefix`**`= None`**`, recursive=False)` Title: `list_objects_v2(bucket_name, prefix`**`= None`**`, recursive=False)` Table: | `objectPrefix` | string | The prefix of the objects that should be listed.**`Optional, default is None.`**| - [x] Return value `object` for **list_objects** and **list_objects_v2** commands have missing parameters in their `Param` tables: | Param | Type | Description | | --------- | ------ | --------------- | | `object.bucket_name` | string | name of the bucket object resides in. | `object.content_type` | string | content-type of the object. | `object.metadata` | dict | contains any additional metadata on the object. | `object.is_dir` | bool | `True` if listed object is a dir (prefix) and `False` otherwise.
1.0
Several documentation issues - There are several minio-py SDK documentation issues as listed below: - [x] **`Missing`** optional arguments and/or their default values in command titles: **`..., request_headers = None`**) in **get_object**, **get_partial_object**, **fget_object** `..., copy_conditions`**`= None`**) in **copy_object** `..., content_type`**`= 'application/octet-stream', metadata = None`**) in **put_object**, **fput_object** Example: `fput_object(bucket_name, object_name, file_path, content_type)` => `fput_object(bucket_name, object_name, file_path, content_type`**` = 'application/octet-stream', metadata = None`**) - [x] Explanation for **fput_object** command argument `file_path` is wrong: | `file_path` | string | Path on the local filesystem **`to which the object data will be written.`**| => | `file_path` | string | Path on the local filesystem **`from which object data will be read.`**| - [x] Optional argument, `objectPrefix`, is listed as mandatory argument for **list_objects** and **list_objects_v2** commands. Fix is needed in the title and in the table as shown below: Title: `list_objects(bucket_name, prefix`**`= None`**`, recursive=False)` Title: `list_objects_v2(bucket_name, prefix`**`= None`**`, recursive=False)` Table: | `objectPrefix` | string | The prefix of the objects that should be listed.**`Optional, default is None.`**| - [x] Return value `object` for **list_objects** and **list_objects_v2** commands have missing parameters in their `Param` tables: | Param | Type | Description | | --------- | ------ | --------------- | | `object.bucket_name` | string | name of the bucket object resides in. | `object.content_type` | string | content-type of the object. | `object.metadata` | dict | contains any additional metadata on the object. | `object.is_dir` | bool | `True` if listed object is a dir (prefix) and `False` otherwise.
priority
several documentation issues there are several minio py sdk documentation issues as listed below missing optional arguments and or their default values in command titles request headers none in get object get partial object fget object copy conditions none in copy object content type application octet stream metadata none in put object fput object example fput object bucket name object name file path content type fput object bucket name object name file path content type application octet stream metadata none explanation for fput object command argument file path is wrong file path string path on the local filesystem to which the object data will be written file path string path on the local filesystem from which object data will be read optional argument objectprefix is listed as mandatory argument for list objects and list objects commands fix is needed in the title and in the table as shown below title list objects bucket name prefix none recursive false title list objects bucket name prefix none recursive false table objectprefix string the prefix of the objects that should be listed optional default is none return value object for list objects and list objects commands have missing parameters in their param tables param type description object bucket name string name of the bucket object resides in object content type string content type of the object object metadata dict contains any additional metadata on the object object is dir bool true if listed object is a dir prefix and false otherwise
1
585,627
17,512,790,739
IssuesEvent
2021-08-11 01:16:58
airbytehq/airbyte
https://api.github.com/repos/airbytehq/airbyte
closed
Do not lose workspace on page refresh.
type/enhancement area/frontend priority/medium airbyte-cloud cloud-private-beta
## Tell us about the problem you're trying to solve In cloud when we refresh the page we end up back in the workspaces list page. We want to stay in the workspace. In the long term we want to add workspace slug to the URL https://github.com/airbytehq/airbyte/issues/5227, but that is a bigger change than we can make before tomorrow. @Jamakase is just going to add workspace id to local storage for now and we will do the url change later.
1.0
Do not lose workspace on page refresh. - ## Tell us about the problem you're trying to solve In cloud when we refresh the page we end up back in the workspaces list page. We want to stay in the workspace. In the long term we want to add workspace slug to the URL https://github.com/airbytehq/airbyte/issues/5227, but that is a bigger change than we can make before tomorrow. @Jamakase is just going to add workspace id to local storage for now and we will do the url change later.
priority
do not lose workspace on page refresh tell us about the problem you re trying to solve in cloud when we refresh the page we end up back in the workspaces list page we want to stay in the workspace in the long term we want to add workspace slug to the url but that is a bigger change than we can make before tomorrow jamakase is just going to add workspace id to local storage for now and we will do the url change later
1
773,180
27,148,668,638
IssuesEvent
2023-02-16 22:23:13
balena-os/balena-supervisor
https://api.github.com/repos/balena-os/balena-supervisor
closed
Allow toggling the public URL from the supervisor API
type/feature Medium Priority low-hanging fruit
This should be a simple call to the backend, to enable the public url on the device resource. Should also provide an endpoint to detail the state of the public url.
1.0
Allow toggling the public URL from the supervisor API - This should be a simple call to the backend, to enable the public url on the device resource. Should also provide an endpoint to detail the state of the public url.
priority
allow toggling the public url from the supervisor api this should be a simple call to the backend to enable the public url on the device resource should also provide an endpoint to detail the state of the public url
1
418,729
12,202,844,163
IssuesEvent
2020-04-30 09:36:32
SKefalidis/DangerousAdventures
https://api.github.com/repos/SKefalidis/DangerousAdventures
closed
[BUG] Cannot reset movement direction after hitting a wall.
bug medium priority
**Describe the bug** The user cannot reset the direction of a monster (using r) if the monster has moved previously. **To Reproduce** Steps to reproduce the behavior: 1. Start turn 2. Select a direction and move to a wall 3. Select direction 4. Press R to reset **Expected behavior** The direction should be reset
1.0
[BUG] Cannot reset movement direction after hitting a wall. - **Describe the bug** The user cannot reset the direction of a monster (using r) if the monster has moved previously. **To Reproduce** Steps to reproduce the behavior: 1. Start turn 2. Select a direction and move to a wall 3. Select direction 4. Press R to reset **Expected behavior** The direction should be reset
priority
cannot reset movement direction after hitting a wall describe the bug the user cannot reset the direction of a monster using r if the monster has moved previously to reproduce steps to reproduce the behavior start turn select a direction and move to a wall select direction press r to reset expected behavior the direction should be reset
1
6,022
2,582,412,051
IssuesEvent
2015-02-15 06:40:37
dhowe/AdNauseam
https://api.github.com/repos/dhowe/AdNauseam
closed
Import/Export: Character encoding problem
Bug Needs-verification PRIORITY: Medium
Character encoding problem when exporting Ads Environment: OS X 10.10.2, Firefox 35, ADP 2.6.7, ADN 1.11 To reproduce this on any Chinese website with EasyList China: 1. export Ads 2. clear Ads 3. import Ads 4. open AD vault and messed up Chinese characters will be seen ![screen shot 2015-01-30 at 5 39 13 pm](https://cloud.githubusercontent.com/assets/2461812/5973769/337def92-a8a7-11e4-9536-a4757321352f.png) Is it using Unicode for exporting JSON now?
1.0
Import/Export: Character encoding problem - Character encoding problem when exporting Ads Environment: OS X 10.10.2, Firefox 35, ADP 2.6.7, ADN 1.11 To reproduce this on any Chinese website with EasyList China: 1. export Ads 2. clear Ads 3. import Ads 4. open AD vault and messed up Chinese characters will be seen ![screen shot 2015-01-30 at 5 39 13 pm](https://cloud.githubusercontent.com/assets/2461812/5973769/337def92-a8a7-11e4-9536-a4757321352f.png) Is it using Unicode for exporting JSON now?
priority
import export character encoding problem character encoding problem when exporting ads environment os x firefox adp adn to reproduce this on any chinese website with easylist china export ads clear ads import ads open ad vault and messed up chinese characters will be seen is it using unicode for exporting json now
1
610,339
18,905,006,718
IssuesEvent
2021-11-16 08:03:57
ThatOneGoat/sidewalk-cv-2021
https://api.github.com/repos/ThatOneGoat/sidewalk-cv-2021
closed
Data acquisiton summary at the end of the pipeline
enhancement Priority: Medium
Right now, we are simply showing analytics for the scraping/cropping at the end of each part's execution. That means on longer runs (perhaps overnight), we miss the analytics for the scraping section because it goes directly into cropping and we lose the terminal output. We should output all data acquisition analytics at the end, preferably in some sort of log file.
1.0
Data acquisiton summary at the end of the pipeline - Right now, we are simply showing analytics for the scraping/cropping at the end of each part's execution. That means on longer runs (perhaps overnight), we miss the analytics for the scraping section because it goes directly into cropping and we lose the terminal output. We should output all data acquisition analytics at the end, preferably in some sort of log file.
priority
data acquisiton summary at the end of the pipeline right now we are simply showing analytics for the scraping cropping at the end of each part s execution that means on longer runs perhaps overnight we miss the analytics for the scraping section because it goes directly into cropping and we lose the terminal output we should output all data acquisition analytics at the end preferably in some sort of log file
1
309,575
9,477,304,932
IssuesEvent
2019-04-19 18:08:39
openbmc/openbmc-test-automation
https://api.github.com/repos/openbmc/openbmc-test-automation
closed
Implement enumerate for redfish resources
Priority Medium
Legacy REST has the object/enumerate feature where it basically do a GET for the resources for the parent and all the child nodes.. There is existing one implemented in test code https://github.com/openbmc/openbmc-test-automation/blob/master/lib/bmc_redfish_utils.py#L162 but that code is not optimize and could be lengthy if the system objects properties are fully loaded.
1.0
Implement enumerate for redfish resources - Legacy REST has the object/enumerate feature where it basically do a GET for the resources for the parent and all the child nodes.. There is existing one implemented in test code https://github.com/openbmc/openbmc-test-automation/blob/master/lib/bmc_redfish_utils.py#L162 but that code is not optimize and could be lengthy if the system objects properties are fully loaded.
priority
implement enumerate for redfish resources legacy rest has the object enumerate feature where it basically do a get for the resources for the parent and all the child nodes there is existing one implemented in test code but that code is not optimize and could be lengthy if the system objects properties are fully loaded
1
41,527
2,869,057,516
IssuesEvent
2015-06-05 22:59:27
dart-lang/matcher
https://api.github.com/repos/dart-lang/matcher
opened
Matcher: extend a Predicate interface
enhancement help wanted Priority-Medium
<a href="https://github.com/seaneagan"><img src="https://avatars.githubusercontent.com/u/444270?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [seaneagan](https://github.com/seaneagan)** _Originally opened as dart-lang/sdk#3722_ ---- If we had: typedef bool Predicate&lt;T&gt;(T item); interface Matcher&lt;T&gt; extends Predicate&lt;T&gt; {...} instead of Matcher#matches, thus using &quot;operator call&quot;, then the following would work: collection.some(same(value)); collection.every(isNotNull); collection.filter(isPositive); Also, issue dart-lang/sdk#2949 could be solved with: switch(x) { &nbsp;&nbsp;match (isNegative) ... &nbsp;&nbsp;case (0) ... &nbsp;&nbsp;match (lessThan(5)) ... &nbsp;&nbsp;// avoid full blown Matcher since don't need mismatch messages &nbsp;&nbsp;match ((i) =&gt; i.isEven()) ... &nbsp;&nbsp;default ... }
1.0
Matcher: extend a Predicate interface - <a href="https://github.com/seaneagan"><img src="https://avatars.githubusercontent.com/u/444270?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [seaneagan](https://github.com/seaneagan)** _Originally opened as dart-lang/sdk#3722_ ---- If we had: typedef bool Predicate&lt;T&gt;(T item); interface Matcher&lt;T&gt; extends Predicate&lt;T&gt; {...} instead of Matcher#matches, thus using &quot;operator call&quot;, then the following would work: collection.some(same(value)); collection.every(isNotNull); collection.filter(isPositive); Also, issue dart-lang/sdk#2949 could be solved with: switch(x) { &nbsp;&nbsp;match (isNegative) ... &nbsp;&nbsp;case (0) ... &nbsp;&nbsp;match (lessThan(5)) ... &nbsp;&nbsp;// avoid full blown Matcher since don't need mismatch messages &nbsp;&nbsp;match ((i) =&gt; i.isEven()) ... &nbsp;&nbsp;default ... }
priority
matcher extend a predicate interface issue by originally opened as dart lang sdk if we had typedef bool predicate lt t gt t item interface matcher lt t gt extends predicate lt t gt instead of matcher matches thus using quot operator call quot then the following would work collection some same value collection every isnotnull collection filter ispositive also issue dart lang sdk could be solved with switch x nbsp nbsp match isnegative nbsp nbsp case nbsp nbsp match lessthan nbsp nbsp avoid full blown matcher since don t need mismatch messages nbsp nbsp match i gt i iseven nbsp nbsp default
1
315,108
9,606,734,319
IssuesEvent
2019-05-11 13:08:10
LifeMC/LifeSkript
https://api.github.com/repos/LifeMC/LifeSkript
closed
Add Timings support to Skript
enhancement feature request medium priority
**Is your feature request related to a problem? Please describe.** Skript sometimes make lag, and all we see in timings report was the delay class. **Describe the solution you'd like** Add timings (v2) support to Skript (backport it), bensku's fork already includes this, for reference. **Describe alternatives you've considered** Debugger agents are not timers; they are special trackers. Timings support should be added in anyway, so no alternative. **Additional information** N/A
1.0
Add Timings support to Skript - **Is your feature request related to a problem? Please describe.** Skript sometimes make lag, and all we see in timings report was the delay class. **Describe the solution you'd like** Add timings (v2) support to Skript (backport it), bensku's fork already includes this, for reference. **Describe alternatives you've considered** Debugger agents are not timers; they are special trackers. Timings support should be added in anyway, so no alternative. **Additional information** N/A
priority
add timings support to skript is your feature request related to a problem please describe skript sometimes make lag and all we see in timings report was the delay class describe the solution you d like add timings support to skript backport it bensku s fork already includes this for reference describe alternatives you ve considered debugger agents are not timers they are special trackers timings support should be added in anyway so no alternative additional information n a
1
207,815
7,134,081,260
IssuesEvent
2018-01-22 19:37:11
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Can't connect to my server.
Medium Priority
Issues happening on GreenLeaf Main. [ClientUpdateException TargetInvocationException 01220331.zip](https://github.com/StrangeLoopGames/EcoIssues/files/1652857/ClientUpdateException.TargetInvocationException.01220331.zip) Server is running 6.4.1 as well as the client, and I seem to be the only one having this issue on Phlo (account name). On another note, I wanted to ask you if modders will get a "heads up" before Beta releases as the current update messed with all my models and nothing works. "Heads up" means a modkit that has been tested on the future update.
1.0
Can't connect to my server. - Issues happening on GreenLeaf Main. [ClientUpdateException TargetInvocationException 01220331.zip](https://github.com/StrangeLoopGames/EcoIssues/files/1652857/ClientUpdateException.TargetInvocationException.01220331.zip) Server is running 6.4.1 as well as the client, and I seem to be the only one having this issue on Phlo (account name). On another note, I wanted to ask you if modders will get a "heads up" before Beta releases as the current update messed with all my models and nothing works. "Heads up" means a modkit that has been tested on the future update.
priority
can t connect to my server issues happening on greenleaf main server is running as well as the client and i seem to be the only one having this issue on phlo account name on another note i wanted to ask you if modders will get a heads up before beta releases as the current update messed with all my models and nothing works heads up means a modkit that has been tested on the future update
1
319,886
9,761,854,142
IssuesEvent
2019-06-05 09:46:48
canonical-web-and-design/vanilla-framework
https://api.github.com/repos/canonical-web-and-design/vanilla-framework
closed
Provide variables to override navigation pattern colours
Priority: Medium
## Pattern to amend p-navigation ## Context Some colours are automatically generated within the p-navigation pattern from the colour variables passed in. There should be an option to override these generated colours with specific variables. e.g. The background colour of hovered nav items is `background-color: darken($color-navigation-background, 3%);` with no way of avoiding this behaviour with a variable
1.0
Provide variables to override navigation pattern colours - ## Pattern to amend p-navigation ## Context Some colours are automatically generated within the p-navigation pattern from the colour variables passed in. There should be an option to override these generated colours with specific variables. e.g. The background colour of hovered nav items is `background-color: darken($color-navigation-background, 3%);` with no way of avoiding this behaviour with a variable
priority
provide variables to override navigation pattern colours pattern to amend p navigation context some colours are automatically generated within the p navigation pattern from the colour variables passed in there should be an option to override these generated colours with specific variables e g the background colour of hovered nav items is background color darken color navigation background with no way of avoiding this behaviour with a variable
1
791,277
27,858,265,377
IssuesEvent
2023-03-21 02:11:50
masastack/MASA.DCC
https://api.github.com/repos/masastack/MASA.DCC
reopened
When there are no projects and applications, the default page is not needed for the time being
type/ui status/resolved severity/medium site/staging priority/p3
没有项目和应用时,暂时不需要缺省页。等后期UI设计后,所有项目统一更换 ![image](https://user-images.githubusercontent.com/95004531/225842253-0d55fcae-72f9-41b3-8ac2-339869db7db4.png) ![image](https://user-images.githubusercontent.com/95004531/225842288-235f6a44-e2b0-4d56-8a73-1249d9db95ea.png) ![image](https://user-images.githubusercontent.com/95004531/225842334-09761f43-86a1-4435-b00d-c2c6020afeaa.png)
1.0
When there are no projects and applications, the default page is not needed for the time being - 没有项目和应用时,暂时不需要缺省页。等后期UI设计后,所有项目统一更换 ![image](https://user-images.githubusercontent.com/95004531/225842253-0d55fcae-72f9-41b3-8ac2-339869db7db4.png) ![image](https://user-images.githubusercontent.com/95004531/225842288-235f6a44-e2b0-4d56-8a73-1249d9db95ea.png) ![image](https://user-images.githubusercontent.com/95004531/225842334-09761f43-86a1-4435-b00d-c2c6020afeaa.png)
priority
when there are no projects and applications the default page is not needed for the time being 没有项目和应用时,暂时不需要缺省页。等后期ui设计后,所有项目统一更换
1
693,018
23,759,413,088
IssuesEvent
2022-09-01 07:34:56
ut-issl/c2a-core
https://api.github.com/repos/ut-issl/c2a-core
opened
一旦 CCP_make_cmd_ret_without_err_code にしたところに,適切なエラーコードをいれるために CCP_make_cmd_ret になおしていく
enhancement priority::medium
## 概要 一旦 CCP_make_cmd_ret_without_err_code にしたところに,適切なエラーコードをいれるために CCP_make_cmd_ret になおしていく ## 詳細 - https://github.com/ut-issl/c2a-core/issues/376 のつづき ## close条件 ひととおりできたら
1.0
一旦 CCP_make_cmd_ret_without_err_code にしたところに,適切なエラーコードをいれるために CCP_make_cmd_ret になおしていく - ## 概要 一旦 CCP_make_cmd_ret_without_err_code にしたところに,適切なエラーコードをいれるために CCP_make_cmd_ret になおしていく ## 詳細 - https://github.com/ut-issl/c2a-core/issues/376 のつづき ## close条件 ひととおりできたら
priority
一旦 ccp make cmd ret without err code にしたところに,適切なエラーコードをいれるために ccp make cmd ret になおしていく 概要 一旦 ccp make cmd ret without err code にしたところに,適切なエラーコードをいれるために ccp make cmd ret になおしていく 詳細 のつづき close条件 ひととおりできたら
1
79,626
3,537,952,870
IssuesEvent
2016-01-18 06:43:23
proveit-js/proveit
https://api.github.com/repos/proveit-js/proveit
closed
Centralize CiteReference per-type configuration
enhancement imported Maintainability Priority-Medium
_From [matthew.flaschen@gatech.edu](https://code.google.com/u/108647890027017428365/) on March 02, 2012 01:19:23_ Centralize the per-type configuration in CiteReference. Rather than having separate default, required, icon, etc, put them in a single object indexed by type then other information. E.g.: var configuration = { web: { default: [], required: [], icon: '' }, // ... } This will simplify adding new types ( http://code.google.com/p/proveit-js/wiki/NewCiteType ), and may not even affect the CiteReference API. _Original issue: http://code.google.com/p/proveit-js/issues/detail?id=128_
1.0
Centralize CiteReference per-type configuration - _From [matthew.flaschen@gatech.edu](https://code.google.com/u/108647890027017428365/) on March 02, 2012 01:19:23_ Centralize the per-type configuration in CiteReference. Rather than having separate default, required, icon, etc, put them in a single object indexed by type then other information. E.g.: var configuration = { web: { default: [], required: [], icon: '' }, // ... } This will simplify adding new types ( http://code.google.com/p/proveit-js/wiki/NewCiteType ), and may not even affect the CiteReference API. _Original issue: http://code.google.com/p/proveit-js/issues/detail?id=128_
priority
centralize citereference per type configuration from on march centralize the per type configuration in citereference rather than having separate default required icon etc put them in a single object indexed by type then other information e g var configuration web default required icon this will simplify adding new types and may not even affect the citereference api original issue
1
175,200
6,548,037,643
IssuesEvent
2017-09-04 18:21:45
cms-gem-daq-project/vfatqc-python-scripts
https://api.github.com/repos/cms-gem-daq-project/vfatqc-python-scripts
opened
Bug Report: run_scans.py spawns additional processes
Priority: Medium Type: Bug
<!--- Provide a general summary of the issue in the Title above --> ## Brief summary of issue <!--- Provide a description of the issue, including any other issues or pull requests it references --> As shown: http://cmsonline.cern.ch/cms-elog/1007990 `run_scans.py` seems to have spawned 12 processes even though `chamber_config` has only one entry (link 0). This has been reported before and in mattermost. Investigating using `top` seems to show these processes are all stuck/dead and take 0% cpu. ### Types of issue <!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: --> - [X] Bug report (report an issue with the code) - [ ] Feature request (request for change which adds functionality) ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> <!--- If you're suggesting a change/improvement, tell us how it should work --> When `chamber_config` has only one entry you should have `run_scans.py` only launch one additional process. ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> <!--- If suggesting a change/improvement, explain the difference from current behavior --> Rarely we see additional processes launched, seems to always be 12, the same as the pool argument (weird...). ### Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug. Include code to reproduce, if relevant --> 1. Open `gem-plotting-tools/mapping/chamberInfo.py` 2. Place only a single entry in `chamber_config` dict (also made GEBtype, and chamber_vfatMask dict's only have one entry, chamber_vfatDACSettings is empty). 3. Launch a trim run with run_scans.py I bet this is not distinct to only trim run. I also suspect that `confAllChambers.py` and `gem-plotting-tools/ana_scans.py` may suffer a similar problem? It seems using the `--series` option allows data-taking to continue unimpeded. 
## Possible Solution (for bugs) <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> Not sure...rare...hard to reproduce. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 18496a1506f0312d2834062bce2e5708abc6d583 * Shell used: /bin/zsh (running on gem904daq01) <!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
1.0
Bug Report: run_scans.py spawns additional processes - <!--- Provide a general summary of the issue in the Title above --> ## Brief summary of issue <!--- Provide a description of the issue, including any other issues or pull requests it references --> As shown: http://cmsonline.cern.ch/cms-elog/1007990 `run_scans.py` seems to have spawned 12 processes even though `chamber_config` has only one entry (link 0). This has been reported before and in mattermost. Investigating using `top` seems to show these processes are all stuck/dead and take 0% cpu. ### Types of issue <!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: --> - [X] Bug report (report an issue with the code) - [ ] Feature request (request for change which adds functionality) ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> <!--- If you're suggesting a change/improvement, tell us how it should work --> When `chamber_config` has only one entry you should have `run_scans.py` only launch one additional process. ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> <!--- If suggesting a change/improvement, explain the difference from current behavior --> Rarely we see additional processes launched, seems to always be 12, the same as the pool argument (weird...). ### Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug. Include code to reproduce, if relevant --> 1. Open `gem-plotting-tools/mapping/chamberInfo.py` 2. Place only a single entry in `chamber_config` dict (also made GEBtype, and chamber_vfatMask dict's only have one entry, chamber_vfatDACSettings is empty). 3. Launch a trim run with run_scans.py I bet this is not distinct to only trim run. I also suspect that `confAllChambers.py` and `gem-plotting-tools/ana_scans.py` may suffer a similar problem? 
It seems using the `--series` option allows data-taking to continue unimpeded. ## Possible Solution (for bugs) <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> Not sure...rare...hard to reproduce. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 18496a1506f0312d2834062bce2e5708abc6d583 * Shell used: /bin/zsh (running on gem904daq01) <!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
priority
bug report run scans py spawns additional processes brief summary of issue as shown run scans py seems to have spawned processes even though chamber config has only one entry link this has been reported before and in mattermost investigating using top seems to show these processes are all stuck dead and take cpu types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior when chamber config has only one entry you should have run scans py only launch one additional process current behavior rarely we see additional processes launched seems to always be the same as the pool argument weird steps to reproduce for bugs open gem plotting tools mapping chamberinfo py place only a single entry in chamber config dict also made gebtype and chamber vfatmask dict s only have one entry chamber vfatdacsettings is empty launch a trim run with run scans py i bet this is not distinct to only trim run i also suspect that confallchambers py and gem plotting tools ana scans py may suffer a similar problem it seems using the series option allows data taking to continue unimpeded possible solution for bugs not sure rare hard to reproduce your environment version used shell used bin zsh running on
1
587,305
17,612,622,881
IssuesEvent
2021-08-18 04:58:48
buddyboss/buddyboss-platform
https://api.github.com/repos/buddyboss/buddyboss-platform
closed
vertical scroll bar disappears (and progressive load fails) when message list height > 660 px
bug priority: medium
**Describe the bug** In Private Messaging, message history loads progressively as long as .bp-messages-nav-panel is shorter than 660px height. However, on large monitors, the div (correctly) gets taller. But then the scrollbar disappears, and as a result, only the first 10 messages show. No progressive loading. **To Reproduce** https://drive.google.com/file/d/1vsnLOpuJV7gilM0jdiInCmP3sX5VWka0/view **Expected behavior** The scroll bar should be relative to .bp-messages-nav-panel and should always display as long as (132 + (66 x {number of total messages})) > height.bp-messages-nav-panel **Support ticket links** https://secure.helpscout.net/conversation/1142950781/69914?folderId=3709081
1.0
vertical scroll bar disappears (and progressive load fails) when message list height > 660 px - **Describe the bug** In Private Messaging, message history loads progressively as long as .bp-messages-nav-panel is shorter than 660px height. However, on large monitors, the div (correctly) gets taller. But then the scrollbar disappears, and as a result, only the first 10 messages show. No progressive loading. **To Reproduce** https://drive.google.com/file/d/1vsnLOpuJV7gilM0jdiInCmP3sX5VWka0/view **Expected behavior** The scroll bar should be relative to .bp-messages-nav-panel and should always display as long as (132 + (66 x {number of total messages})) > height.bp-messages-nav-panel **Support ticket links** https://secure.helpscout.net/conversation/1142950781/69914?folderId=3709081
priority
vertical scroll bar disappears and progressive load fails when message list height px describe the bug in private messaging message history loads progressively as long as bp messages nav panel is shorter than height however on large monitors the div correctly gets taller but then the scrollbar disappears and as a result only the first messages show no progressive loading to reproduce expected behavior the scroll bar should be relative to bp messages nav panel and should always display as long as x number of total messages height bp messages nav panel support ticket links
1
264,254
8,306,930,669
IssuesEvent
2018-09-23 01:13:14
dirkwhoffmann/virtualc64
https://api.github.com/repos/dirkwhoffmann/virtualc64
opened
Test case VICII/split-tests/lightpen.prg fails
Priority-Medium bug
VirtualC64: <img width="791" alt="lightpen" src="https://user-images.githubusercontent.com/12561945/45923162-30098380-bf32-11e8-89d4-89cb4d84a320.png"> VICE: <img width="853" alt="lightpen_vice" src="https://user-images.githubusercontent.com/12561945/45923163-34ce3780-bf32-11e8-8157-445c0a3b7b8a.png">
1.0
Test case VICII/split-tests/lightpen.prg fails - VirtualC64: <img width="791" alt="lightpen" src="https://user-images.githubusercontent.com/12561945/45923162-30098380-bf32-11e8-89d4-89cb4d84a320.png"> VICE: <img width="853" alt="lightpen_vice" src="https://user-images.githubusercontent.com/12561945/45923163-34ce3780-bf32-11e8-8157-445c0a3b7b8a.png">
priority
test case vicii split tests lightpen prg fails img width alt lightpen src vice img width alt lightpen vice src
1
47,198
2,974,598,080
IssuesEvent
2015-07-15 02:13:31
Reimashi/jotai
https://api.github.com/repos/Reimashi/jotai
closed
CPU Temperature missing
auto-migrated Priority-Medium Type-Enhancement
``` What is the expected output? What do you see instead? I do not see CPU temperature, only use. For one of the HDs the temperature is missing. I do not see any output of the graphics card. What version of the product are you using? On what operating system? Version is 0.3.2 Beta Please provide any additional information below. Please attach a Report created with "File / Save Report...". ``` Original issue reported on code.google.com by `step...@familie-ott.info` on 1 Oct 2011 at 3:35 Attachments: * [OpenHardwareMonitor.Report.txt](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-280/comment-0/OpenHardwareMonitor.Report.txt)
1.0
CPU Temperature missing - ``` What is the expected output? What do you see instead? I do not see CPU temperature, only use. For one of the HDs the temperature is missing. I do not see any output of the graphics card. What version of the product are you using? On what operating system? Version is 0.3.2 Beta Please provide any additional information below. Please attach a Report created with "File / Save Report...". ``` Original issue reported on code.google.com by `step...@familie-ott.info` on 1 Oct 2011 at 3:35 Attachments: * [OpenHardwareMonitor.Report.txt](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-280/comment-0/OpenHardwareMonitor.Report.txt)
priority
cpu temperature missing what is the expected output what do you see instead i do not see cpu temperature only use for one of the hds the temperature is missing i do not see any output of the graphics card what version of the product are you using on what operating system version is beta please provide any additional information below please attach a report created with file save report original issue reported on code google com by step familie ott info on oct at attachments
1
720,161
24,781,741,414
IssuesEvent
2022-10-24 06:04:39
airqo-platform/AirQo-frontend
https://api.github.com/repos/airqo-platform/AirQo-frontend
closed
Redirect all AirQo domains point to the default one (.net)
enhancement airqo-website feature-request priority-medium
**Is your feature request related to a problem? Please describe.** AirQo has many domain names, tracking the analytics for all these could be hard to manage. **Describe the solution you'd like** Redirect all of them to the default domain name which is tracked. This will make it easy to get analytics of all our visitors all in one place. **Describe alternatives you've considered** N/A **Additional context** This was suggested by Paul Z in one of the demo meetings....
1.0
Redirect all AirQo domains point to the default one (.net) - **Is your feature request related to a problem? Please describe.** AirQo has many domain names, tracking the analytics for all these could be hard to manage. **Describe the solution you'd like** Redirect all of them to the default domain name which is tracked. This will make it easy to get analytics of all our visitors all in one place. **Describe alternatives you've considered** N/A **Additional context** This was suggested by Paul Z in one of the demo meetings....
priority
redirect all airqo domains point to the default one net is your feature request related to a problem please describe airqo has many domain names tracking the analytics for all these could be hard to manage describe the solution you d like redirect all of them to the default domain name which is tracked this will make it easy to get analytics of all our visitors all in one place describe alternatives you ve considered n a additional context this was suggested by paul z in one of the demo meetings
1
772,363
27,118,423,390
IssuesEvent
2023-02-15 20:34:00
NASA-NAVO/navo-workshop
https://api.github.com/repos/NASA-NAVO/navo-workshop
closed
Make a separate "environment" repo that pulls the notebook content
enhancement priority:medium
Following [this Binder Discourse thread](https://discourse.jupyter.org/t/how-to-reduce-mybinder-org-repository-startup-time/4956), we can make a separate repository specifying the environment, then use `nbgitpuller` to pull the content from our notebooks repository. This way, Binder sessions will start quickly, even if we tweak notebooks just before a workshop. Furthermore, following [thread](https://discourse.jupyter.org/t/pycon-2019-and-mybinder-org/920), in this environment repo we can install a Jupyterhub extension that downloads a zipfile of all the notebooks, [nbzip](https://github.com/data-8/nbzip). Then we can encourage Binder users to download a zipfile whenever they want to save their work.
1.0
Make a separate "environment" repo that pulls the notebook content - Following [this Binder Discourse thread](https://discourse.jupyter.org/t/how-to-reduce-mybinder-org-repository-startup-time/4956), we can make a separate repository specifying the environment, then use `nbgitpuller` to pull the content from our notebooks repository. This way, Binder sessions will start quickly, even if we tweak notebooks just before a workshop. Furthermore, following [thread](https://discourse.jupyter.org/t/pycon-2019-and-mybinder-org/920), in this environment repo we can install a Jupyterhub extension that downloads a zipfile of all the notebooks, [nbzip](https://github.com/data-8/nbzip). Then we can encourage Binder users to download a zipfile whenever they want to save their work.
priority
make a separate environment repo that pulls the notebook content following we can make a separate repository specifying the environment then use nbgitpuller to pull the content from our notebooks repository this way binder sessions will start quickly even if we tweak notebooks just before a workshop furthermore following in this environment repo we can install a jupyterhub extension that downloads a zipfile of all the notebooks then we can encourage binder users to download a zipfile whenever they want to save their work
1
119,714
4,774,982,492
IssuesEvent
2016-10-27 08:53:25
MatchboxDorry/dorry-web
https://api.github.com/repos/MatchboxDorry/dorry-web
closed
[UI] test1-see the user head
censor: approved effort: 2 (medium) feature: view template flag: fixed priority: 2 (required) type: enhancement
**System:** Mac mini Os X EI Capitan **Browser:** Chrome **What I want to do** I want to see the user head and name. **Where I am** services page **What I have done** I click 'services' button in menu and see the page of services. **What I expect:** I can see user head and name at the top right-hand corner. ![user head](https://cloud.githubusercontent.com/assets/22925114/19512797/29f1023a-9620-11e6-8abd-f0a89e114939.png) **What really happened** ![user head and name](https://cloud.githubusercontent.com/assets/22925114/19512485/b37534ce-961e-11e6-909f-e818b7aa6d80.png)
1.0
[UI] test1-see the user head - **System:** Mac mini Os X EI Capitan **Browser:** Chrome **What I want to do** I want to see the user head and name. **Where I am** services page **What I have done** I click 'services' button in menu and see the page of services. **What I expect:** I can see user head and name at the top right-hand corner. ![user head](https://cloud.githubusercontent.com/assets/22925114/19512797/29f1023a-9620-11e6-8abd-f0a89e114939.png) **What really happened** ![user head and name](https://cloud.githubusercontent.com/assets/22925114/19512485/b37534ce-961e-11e6-909f-e818b7aa6d80.png)
priority
see the user head system mac mini os x ei capitan browser chrome what i want to do i want to see the user head and name where i am services page what i have done i click services button in menu and see the page of services what i expect i can see user head and name at the top right hand corner what really happened
1
660,362
21,963,331,413
IssuesEvent
2022-05-24 17:40:06
QuiltMC/quiltflower
https://api.github.com/repos/QuiltMC/quiltflower
closed
`requires static` is decompiled as `requires` in module-info
bug Subsystem: Writing Priority: Medium
Example: `META-INF/versions/16/module-info.class` in https://github.com/Juuxel/unprotect/releases/download/1.1.0/unprotect-1.1.0.jar The static vs normal `requires` info is available in the bytecode since JDK's javap can read it. (And produce a more accurate decompiled version of the module-info, ironically)
1.0
`requires static` is decompiled as `requires` in module-info - Example: `META-INF/versions/16/module-info.class` in https://github.com/Juuxel/unprotect/releases/download/1.1.0/unprotect-1.1.0.jar The static vs normal `requires` info is available in the bytecode since JDK's javap can read it. (And produce a more accurate decompiled version of the module-info, ironically)
priority
requires static is decompiled as requires in module info example meta inf versions module info class in the static vs normal requires info is available in the bytecode since jdk s javap can read it and produce a more accurate decompiled version of the module info ironically
1
277,842
8,633,367,292
IssuesEvent
2018-11-22 13:40:14
geosolutions-it/pyfulcrum
https://api.github.com/repos/geosolutions-it/pyfulcrum
closed
PyBackup filter/validation
Priority: Medium Task review
- [x] Option to possibly define a list of forms and users to be included in the backups. Records will be filtered according to this information in case it is defined. - [x] Incoming (**filtered**) records need to be validated before starting the involved backups procedures. Payload nature and contents (as well as requests headers if possible) need to be verified before accepting the record. In case it doesn’t fulfill the validation rules it will be rejected.
1.0
PyBackup filter/validation - - [x] Option to possibly define a list of forms and users to be included in the backups. Records will be filtered according to this information in case it is defined. - [x] Incoming (**filtered**) records need to be validated before starting the involved backups procedures. Payload nature and contents (as well as requests headers if possible) need to be verified before accepting the record. In case it doesn’t fulfill the validation rules it will be rejected.
priority
pybackup filter validation option to possibly define a list of forms and users to be included in the backups records will be filtered according to this information in case it is defined incoming filtered records need to be validated before starting the involved backups procedures payload nature and contents as well as requests headers if possible need to be verified before accepting the record in case it doesn’t fulfill the validation rules it will be rejected
1
188,681
6,779,848,363
IssuesEvent
2017-10-29 06:03:47
MyMICDS/MyMICDS-v2-Angular
https://api.github.com/repos/MyMICDS/MyMICDS-v2-Angular
closed
Edit button is there even when user isn't logged in
effort: easy priority: medium ui / ux work length: short
A great idea would be to keep the edit button, but have some sort of dialogue that says the user must create an account before customizing the homepage
1.0
Edit button is there even when user isn't logged in - A great idea would be to keep the edit button, but have some sort of dialogue that says the user must create an account before customizing the homepage
priority
edit button is there even when user isn t logged in a great idea would be to keep the edit button but have some sort of dialogue that says the user must create an account before customizing the homepage
1
331,307
10,063,995,978
IssuesEvent
2019-07-23 07:36:32
AbsaOSS/enceladus
https://api.github.com/repos/AbsaOSS/enceladus
closed
Add index sort direction and uniqueness constraints to migration framework
feature priority: medium
## Background * MongoDB compound indexes can have [a sort order](https://docs.mongodb.com/manual/core/index-compound/#index-ascending-and-descending). Currently, the Migration framework allows to create compound indexes, but only with ascending order. * MongoDB indexes can have [a uniqueness constraint](https://docs.mongodb.com/manual/core/index-unique/). Currently, the Migration framework does not allow index uniqueness to be enforced. ## Feature It would be very beneficial to expand the indexing functionality to include these features.
1.0
Add index sort direction and uniqueness constraints to migration framework - ## Background * MongoDB compound indexes can have [a sort order](https://docs.mongodb.com/manual/core/index-compound/#index-ascending-and-descending). Currently, the Migration framework allows to create compound indexes, but only with ascending order. * MongoDB indexes can have [a uniqueness constraint](https://docs.mongodb.com/manual/core/index-unique/). Currently, the Migration framework does not allow index uniqueness to be enforced. ## Feature It would be very beneficial to expand the indexing functionality to include these features.
priority
add index sort direction and uniqueness constraints to migration framework background mongodb compound indexes can have currently the migration framework allows to create compound indexes but only with ascending order mongodb indexes can have currently the migration framework does not allow index uniqueness to be enforced feature it would be very beneficial to expand the indexing functionality to include these features
1
657,468
21,794,894,729
IssuesEvent
2022-05-15 13:31:40
SELab-2/OSOC-6
https://api.github.com/repos/SELab-2/OSOC-6
opened
Communication list
enhancement priority:medium
This issue will track the creation of a communication list component. This component will be restricted to one student.
1.0
Communication list - This issue will track the creation of a communication list component. This component will be restricted to one student.
priority
communication list this issue will track the creation of a communication list component this component will be restricted to one student
1
316,284
9,640,112,506
IssuesEvent
2019-05-16 14:52:00
eJourn-al/eJournal
https://api.github.com/repos/eJourn-al/eJournal
closed
More intuitive adding of nodes in timeline
Priority: High Status: Review Needed Type: Enhancement Workload: Medium
**Describe the solution you'd like** Adding nodes to format editor needs to be in a more intuitive way upon pressing the new node button. Currently, a node is immediately added upon clicking the add node. We should have this work the same as adding a node to a journal by a student.
1.0
More intuitive adding of nodes in timeline - **Describe the solution you'd like** Adding nodes to format editor needs to be in a more intuitive way upon pressing the new node button. Currently, a node is immediately added upon clicking the add node. We should have this work the same as adding a node to a journal by a student.
priority
more intuitive adding of nodes in timeline describe the solution you d like adding nodes to format editor needs to be in a more intuitive way upon pressing the new node button currently a node is immediately added upon clicking the add node we should have this work the same as adding a node to a journal by a student
1
120,832
4,794,936,639
IssuesEvent
2016-10-31 22:42:00
w3prog/GoodTravel
https://api.github.com/repos/w3prog/GoodTravel
closed
Creating a mathematical model for route selection based on given criteria.
Feature help wanted Priority: MEDIUM
We need to work out options for calculating suitable places for specific queries
1.0
Creating a mathematical model for route selection based on given criteria. - We need to work out options for calculating suitable places for specific queries
priority
creating a mathematical model for route selection based on given criteria we need to work out options for calculating suitable places for specific queries
1
405,439
11,873,254,919
IssuesEvent
2020-03-26 17:01:02
JEvents/JEvents
https://api.github.com/repos/JEvents/JEvents
closed
uikit: Club Theme Options tab missing from JEvents configuration
Priority - Medium
As stated, there is no tab in the JEvents configuration for club layouts.
1.0
uikit: Club Theme Options tab missing from JEvents configuration - As stated, there is no tab in the JEvents configuration for club layouts.
priority
uikit club theme options tab missing from jevents configuration as stated there is no tab in the jevents configuration for club layouts
1
587,713
17,629,815,288
IssuesEvent
2021-08-19 06:15:08
nimblehq/nimble-medium-ios
https://api.github.com/repos/nimblehq/nimble-medium-ios
opened
As a user, I can mark an article as my favorite
type : feature category : backend priority : medium
## Why Once logged in, the users will be able to mark an article as their favorite. ## Acceptance Criteria - [ ] Implement the API for marking an article as favorite via this [endpoint](https://github.com/gothinkster/realworld/tree/master/api#favorite-article). - [ ] The endpoint takes the article's `slug` as the query parameter in the request url. - [ ] Create a usecase for marking an article as favorite for later usage in integrate task. - [ ] Parse the response into the corresponding model, here is an example response for marking an article as favorite: ``` { "article": { "slug": "how-to-train-your-dragon", "title": "How to train your dragon", "description": "Ever wonder how?", "body": "It takes a Jacobian", "tagList": ["dragons", "training"], "createdAt": "2016-02-18T03:22:56.637Z", "updatedAt": "2016-02-18T03:48:35.824Z", "favorited": false, "favoritesCount": 0, "author": { "username": "jake", "bio": "I work at statefarm", "image": "https://i.stack.imgur.com/xHWG8.jpg", "following": false } } } ```
1.0
As a user, I can mark an article as my favorite - ## Why Once logged in, the users will be able to mark an article as their favorite. ## Acceptance Criteria - [ ] Implement the API for marking an article as favorite via this [endpoint](https://github.com/gothinkster/realworld/tree/master/api#favorite-article). - [ ] The endpoint takes the article's `slug` as the query parameter in the request url. - [ ] Create a usecase for marking an article as favorite for later usage in integrate task. - [ ] Parse the response into the corresponding model, here is an example response for marking an article as favorite: ``` { "article": { "slug": "how-to-train-your-dragon", "title": "How to train your dragon", "description": "Ever wonder how?", "body": "It takes a Jacobian", "tagList": ["dragons", "training"], "createdAt": "2016-02-18T03:22:56.637Z", "updatedAt": "2016-02-18T03:48:35.824Z", "favorited": false, "favoritesCount": 0, "author": { "username": "jake", "bio": "I work at statefarm", "image": "https://i.stack.imgur.com/xHWG8.jpg", "following": false } } } ```
priority
as a user i can mark an article as my favorite why once logged in the users will be able to mark an article as their favorite acceptance criteria implement the api for marking an article as favorite via this the endpoint takes the article s slug as the query parameter in the request url create a usecase for marking an article as favorite for later usage in integrate task parse the response into the corresponding model here is an example response for marking an article as favorite article slug how to train your dragon title how to train your dragon description ever wonder how body it takes a jacobian taglist createdat updatedat favorited false favoritescount author username jake bio i work at statefarm image following false
1
47,138
2,974,566,581
IssuesEvent
2015-07-15 01:52:56
MozillaHive/HiveCHI-rwm
https://api.github.com/repos/MozillaHive/HiveCHI-rwm
closed
Link to Registration page on Login page
Medium Priority
The registration page is accessible as /register. Put a button on the login page to link to it.
1.0
Link to Registration page on Login page - The registration page is accessible as /register. Put a button on the login page to link to it.
priority
link to registration page on login page the registration page is accessible as register put a button on the login page to link to it
1
293,660
8,998,645,519
IssuesEvent
2019-02-02 23:58:34
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
Assert and printk not printed on RTT
area: Logging bug priority: medium
**Describe the bug** Log messages are printed, but assert message is not printed out. As a result, it gives impression that device hanged, but does not give any clue why. You can see that it was an assertion, by opening debugger and stopping after the device hanged: ``` >> ninja debug gdb> c gdb> (CTRL + C) ``` **To Reproduce** Happens for any assert in Zephyr subsystems. Can be reproduced on hello_world with small modification: ``` #include <zephyr.h> #include <misc/printk.h> #include <assert.h> #define HELLO_LOG_LEVEL 4 #include <logging/log.h> LOG_MODULE_REGISTER(hello, HELLO_LOG_LEVEL); void main(void) { for (int i=0; i<5; i++) { LOG_WRN("test %d", i); } k_sleep(100); __ASSERT(false, "Asserted!"); printk("Hello World! %s\n", CONFIG_BOARD); } ``` Output: ``` [00000000] <wrn> hello: test 0 [00000000] <wrn> hello: test 1 [00000000] <wrn> hello: test 2 [00000000] <wrn> hello: test 3 [00000000] <wrn> hello: test 4 ``` Assert message `ASSERTION FAIL ...` is not printed. It is the same with `__ASSERT_NO_MSG` or if instead of assert there is just a call to `printk()`; **Environment:** In configuration, printk is processed by logger: `CONFIG_LOG_PRINTK=y` Full config: ``` CONFIG_ASSERT=y CONFIG_ASSERT_LEVEL=2 CONFIG_LOG=y CONFIG_LOG_DEFAULT_LEVEL=2 CONFIG_LOG_BACKEND_RTT=y CONFIG_LOG_BACKEND_RTT_MODE_DROP=y CONFIG_LOG_MODE_NO_OVERFLOW=y CONFIG_LOG_PRINTK=y CONFIG_LOG_PRINTK_MAX_STRING_LENGTH=256 CONFIG_LOG_BUFFER_SIZE=4096 CONFIG_LOG_BACKEND_RTT_MESSAGE_SIZE=256 CONFIG_LOG_STRDUP_BUF_COUNT=64 CONFIG_LOG_STRDUP_MAX_STRING=64 CONFIG_LOG_BACKEND_SHOW_COLOR=n CONFIG_LOG_BACKEND_FORMAT_TIMESTAMP=n CONFIG_LOG_PROCESS_THREAD_STACK_SIZE=1024 CONFIG_CONSOLE=y CONFIG_USE_SEGGER_RTT=y CONFIG_SEGGER_RTT_BUFFER_SIZE_UP=4096 CONFIG_RTT_CONSOLE=y CONFIG_UART_CONSOLE=n ```
1.0
Assert and printk not printed on RTT - **Describe the bug** Log messages are printed, but assert message is not printed out. As a result, it gives impression that device hanged, but does not give any clue why. You can see that it was an assertion, by opening debugger and stopping after the device hanged: ``` >> ninja debug gdb> c gdb> (CTRL + C) ``` **To Reproduce** Happens for any assert in Zephyr subsystems. Can be reproduced on hello_world with small modification: ``` #include <zephyr.h> #include <misc/printk.h> #include <assert.h> #define HELLO_LOG_LEVEL 4 #include <logging/log.h> LOG_MODULE_REGISTER(hello, HELLO_LOG_LEVEL); void main(void) { for (int i=0; i<5; i++) { LOG_WRN("test %d", i); } k_sleep(100); __ASSERT(false, "Asserted!"); printk("Hello World! %s\n", CONFIG_BOARD); } ``` Output: ``` [00000000] <wrn> hello: test 0 [00000000] <wrn> hello: test 1 [00000000] <wrn> hello: test 2 [00000000] <wrn> hello: test 3 [00000000] <wrn> hello: test 4 ``` Assert message `ASSERTION FAIL ...` is not printed. It is the same with `__ASSERT_NO_MSG` or if instead of assert there is just a call to `printk()`; **Environment:** In configuration, printk is processed by logger: `CONFIG_LOG_PRINTK=y` Full config: ``` CONFIG_ASSERT=y CONFIG_ASSERT_LEVEL=2 CONFIG_LOG=y CONFIG_LOG_DEFAULT_LEVEL=2 CONFIG_LOG_BACKEND_RTT=y CONFIG_LOG_BACKEND_RTT_MODE_DROP=y CONFIG_LOG_MODE_NO_OVERFLOW=y CONFIG_LOG_PRINTK=y CONFIG_LOG_PRINTK_MAX_STRING_LENGTH=256 CONFIG_LOG_BUFFER_SIZE=4096 CONFIG_LOG_BACKEND_RTT_MESSAGE_SIZE=256 CONFIG_LOG_STRDUP_BUF_COUNT=64 CONFIG_LOG_STRDUP_MAX_STRING=64 CONFIG_LOG_BACKEND_SHOW_COLOR=n CONFIG_LOG_BACKEND_FORMAT_TIMESTAMP=n CONFIG_LOG_PROCESS_THREAD_STACK_SIZE=1024 CONFIG_CONSOLE=y CONFIG_USE_SEGGER_RTT=y CONFIG_SEGGER_RTT_BUFFER_SIZE_UP=4096 CONFIG_RTT_CONSOLE=y CONFIG_UART_CONSOLE=n ```
priority
assert and printk not printed on rtt describe the bug log messages are printed but assert message is not printed out as a result it gives impression that device hanged but does not give any clue why you can see that it was an assertion by opening debugger and stopping after the device hanged ninja debug gdb c gdb ctrl c to reproduce happens for any assert in zephyr subsystems can be reproduced on hello world with small modification include include include define hello log level include log module register hello hello log level void main void for int i i i log wrn test d i k sleep assert false asserted printk hello world s n config board output hello test hello test hello test hello test hello test assert message assertion fail is not printed it is the same with assert no msg or if instead of assert there is just a call to printk environment in configuration printk is processed by logger config log printk y full config config assert y config assert level config log y config log default level config log backend rtt y config log backend rtt mode drop y config log mode no overflow y config log printk y config log printk max string length config log buffer size config log backend rtt message size config log strdup buf count config log strdup max string config log backend show color n config log backend format timestamp n config log process thread stack size config console y config use segger rtt y config segger rtt buffer size up config rtt console y config uart console n
1
791,773
27,876,499,461
IssuesEvent
2023-03-21 16:20:25
AY2223S2-CS2113-F13-2/tp
https://api.github.com/repos/AY2223S2-CS2113-F13-2/tp
opened
Add Wishlist feature
enhancement type.Story priority.Medium
Allows users to have a wishlist, which is a list of goods that the user would like to save for and buy. Users will be able to add new goods to the wishlist. Users can dedicate certain incomes to their wishlist, effectively saving up for the product. Users will be able to see how much percentage of the good's price has been saved for purchase.
1.0
Add Wishlist feature - Allows users to have a wishlist, which is a list of goods that the user would like to save for and buy. Users will be able to add new goods to the wishlist. Users can dedicate certain incomes to their wishlist, effectively saving up for the product. Users will be able to see how much percentage of the good's price has been saved for purchase.
priority
add wishlist feature allows users to have a wishlist which is a list of goods that the user would like to save for and buy users will be able to add new goods to the wishlist users can dedicate certain incomes to their wishlist effectively saving up for the product users will be able to see how much percentage of the good s price has been saved for purchase
1
1,974
2,522,368,910
IssuesEvent
2015-01-19 21:32:52
roberttdev/dactyl4
https://api.github.com/repos/roberttdev/dactyl4
closed
Being Ready for Data Entry and Clicking Clone on Another Point is Visually Uncertain
medium priority
When in data entry, if I have one point ready for data entry and then click clone on another point, the first point appears ready for DE while the second point appears ready for cloning. Steps to reproduce: 1. Create a templated group 2. Click on the first point to do DE 3. It turns yellow 4. Click on the clone icon for the second point 5. It turns orange Result: One point is yellow; the other is orange. Expected result: The yellow point deselects (yellow goes away); the point for cloning is ready.
1.0
Being Ready for Data Entry and Clicking Clone on Another Point is Visually Uncertain - When in data entry, if I have one point ready for data entry and then click clone on another point, the first point appears ready for DE while the second point appears ready for cloning. Steps to reproduce: 1. Create a templated group 2. Click on the first point to do DE 3. It turns yellow 4. Click on the clone icon for the second point 5. It turns orange Result: One point is yellow; the other is orange. Expected result: The yellow point deselects (yellow goes away); the point for cloning is ready.
priority
being ready for data entry and clicking clone on another point is visually uncertain when in data entry if i have one point ready for data entry and then click clone on another point the first point appears ready for de while the second point appears ready for cloning steps to reproduce create a templated group click on the first point to do de it turns yellow click on the clone icon for the second point it turns orange result one point is yellow the other is orange expected result the yellow point deselects yellow goes away the point for cloning is ready
1
703,340
24,154,490,789
IssuesEvent
2022-09-22 06:17:23
redhat-developer/odo
https://api.github.com/repos/redhat-developer/odo
reopened
`odo dev` not handling changes to the Devfile after removing a Binding with `odo remove binding`
kind/bug priority/Medium area/binding triage/ready
/kind bug /area binding ## What versions of software are you using? **Operating System:** Fedora 36 **Output of `odo version`:** odo v3.0.0-rc1 (897f5f3e0) ## How did you run odo exactly? Reproduction steps: 1. New project, but you can use any other project: `odo init --name my-sample-go --devfile go --starter go-starter` 2. Run `odo dev`, and wait until the Dev session is up and running 3. Add a new binding, e.g.: `odo add binding --name my-sample-go-postgresql-binding --service postgresql` => Changes to the Devfile are automatically handled, and the pod is recreated and binding information injected, which is fine. 4. Wait until the Dev session is ok again. Now, if I remove the previous binding from the Devfile with `odo remove binding --name my-sample-go-postgresql-binding`, the running `odo dev` session will just display "Updating component...", but it looks like nothing is actually happening in the cluster. ## Actual behavior The pod is not recreated and the binding information is still there in the pod. ## Expected behavior Similar to how `odo add binding` affects the currently running Dev pod, I would expect `odo remove binding` to cause the Dev pod to be recreated with no binding data injected in it. ## Any logs, error output, etc?
1.0
`odo dev` not handling changes to the Devfile after removing a Binding with `odo remove binding` - /kind bug /area binding ## What versions of software are you using? **Operating System:** Fedora 36 **Output of `odo version`:** odo v3.0.0-rc1 (897f5f3e0) ## How did you run odo exactly? Reproduction steps: 1. New project, but you can use any other project: `odo init --name my-sample-go --devfile go --starter go-starter` 2. Run `odo dev`, and wait until the Dev session is up and running 3. Add a new binding, e.g.: `odo add binding --name my-sample-go-postgresql-binding --service postgresql` => Changes to the Devfile are automatically handled, and the pod is recreated and binding information injected, which is fine. 4. Wait until the Dev session is ok again. Now, if I remove the previous binding from the Devfile with `odo remove binding --name my-sample-go-postgresql-binding`, the running `odo dev` session will just display "Updating component...", but it looks like nothing is actually happening in the cluster. ## Actual behavior The pod is not recreated and the binding information is still there in the pod. ## Expected behavior Similar to how `odo add binding` affects the currently running Dev pod, I would expect `odo remove binding` to cause the Dev pod to be recreated with no binding data injected in it. ## Any logs, error output, etc?
priority
odo dev not handling changes to the devfile after removing a binding with odo remove binding kind bug area binding what versions of software are you using operating system fedora output of odo version odo how did you run odo exactly reproduction steps new project but you can use any other project odo init name my sample go devfile go starter go starter run odo dev and wait until the dev session is up and running add a new binding e g odo add binding name my sample go postgresql binding service postgresql changes to the devfile are automatically handled and the pod is recreated and binding information injected which is fine wait until the dev session is ok again now if i remove the previous binding from the devfile with odo remove binding name my sample go postgresql binding the running odo dev session will just display updating component but it looks like nothing is actually happening in the cluster actual behavior the pod is not recreated and the binding information is still there in the pod expected behavior similar to how odo add binding affects the currently running dev pod i would expect odo remove binding to cause the dev pod to be recreated with no binding data injected in it any logs error output etc
1
38,599
2,849,190,795
IssuesEvent
2015-05-30 13:30:45
joshdrummond/webpasswordsafe
https://api.github.com/repos/joshdrummond/webpasswordsafe
closed
Duo Security Multi-Factor Authentication Plugin
enhancement Milestone-Release1.4 Priority-Medium Type-Enhancement
Create an authentication plugin that supports Duo Security multi-factor authentication (https://www.duosecurity.com/)
1.0
Duo Security Multi-Factor Authentication Plugin - Create an authentication plugin that supports Duo Security multi-factor authentication (https://www.duosecurity.com/)
priority
duo security multi factor authentication plugin create an authentication plugin that supports duo security multi factor authentication
1
795,219
28,066,164,530
IssuesEvent
2023-03-29 15:33:37
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[DocDB] Deadlock detector may overwrite wait-for information if received from multiple tablets
kind/bug area/docdb priority/medium
Jira Link: [DB-5459](https://yugabyte.atlassian.net/browse/DB-5459) ### Description In case we have a transaction waiting at multiple tablets, deadlock detection may fail to detect the deadlock, since it currently stores only the latest update it received from any tablet. We can hit such a case if the user issues an UPDATE which spans many rows. The following test demonstrates how it might fail: ``` TEST_F(PgWaitQueuesTest, YB_DISABLE_TEST_IN_TSAN(ParallelUpdatesDetectDeadlock)) { constexpr int kNumKeys = 20; auto setup_conn = ASSERT_RESULT(Connect()); ASSERT_OK(setup_conn.Execute("CREATE TABLE foo (k INT PRIMARY KEY, v INT)")); for (int deadlock_idx = 1; deadlock_idx <= kNumKeys; ++deadlock_idx) { ASSERT_OK(setup_conn.Execute("TRUNCATE TABLE foo")); ASSERT_OK(setup_conn.ExecuteFormat( "INSERT INTO foo SELECT generate_series(0, $0), 0", kNumKeys * 5)); TestThreadHolder thread_holder; auto update_conn = ASSERT_RESULT(Connect()); ASSERT_OK(update_conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); ASSERT_OK(update_conn.Fetch("SELECT * FROM foo WHERE k=0 FOR UPDATE")); CountDownLatch locked_key(kNumKeys); CountDownLatch did_deadlock(1); for (int key_idx = 1; key_idx <= kNumKeys; ++key_idx) { thread_holder.AddThreadFunctor([this, &did_deadlock, &locked_key, key_idx, deadlock_idx] { auto conn = ASSERT_RESULT(Connect()); ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); ASSERT_OK(conn.FetchFormat("SELECT * FROM foo WHERE k=$0 FOR UPDATE", key_idx)); LOG(INFO) << "Thread " << key_idx << " locked key"; locked_key.CountDown(); ASSERT_TRUE(locked_key.WaitFor(10s * kTimeMultiplier)); if (deadlock_idx == key_idx) { std::this_thread::sleep_for(5s * kTimeMultiplier); LOG(INFO) << "Thread " << key_idx << " locking 0"; auto s = conn.Fetch("SELECT * FROM foo WHERE k=0 FOR UPDATE"); if (!s.ok()) { LOG(INFO) << "Thread " << key_idx << " failed to lock 0 " << s; did_deadlock.CountDown(); ASSERT_OK(conn.RollbackTransaction()); return; } else { LOG(INFO) << "Thread " << key_idx << " locked 0"; } } ASSERT_TRUE(did_deadlock.WaitFor(20s * kTimeMultiplier)); ASSERT_OK(conn.CommitTransaction()); LOG(INFO) << "Thread " << key_idx << " committed"; }); } ASSERT_TRUE(locked_key.WaitFor(5s * kTimeMultiplier)); LOG(INFO) << "About to update"; auto s = update_conn.ExecuteFormat("UPDATE foo SET v=20 WHERE k > 0 AND k <= $0", kNumKeys); if (s.ok()) { LOG(INFO) << "Successfully updated"; EXPECT_EQ(did_deadlock.count(), 0); ASSERT_OK(update_conn.CommitTransaction()); } else { LOG(INFO) << "Failed to update " << s; did_deadlock.CountDown(); } } } ``` [DB-5459]: https://yugabyte.atlassian.net/browse/DB-5459?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
1.0
[DocDB] Deadlock detector may overwrite wait-for information if received from multiple tablets - Jira Link: [DB-5459](https://yugabyte.atlassian.net/browse/DB-5459) ### Description In case we have a transaction waiting at multiple tablets, deadlock detection may fail to detect the deadlock, since it currently stores only the latest update it received from any tablet. We can hit such a case if the user issues an UPDATE which spans many rows. The following test demonstrates how it might fail: ``` TEST_F(PgWaitQueuesTest, YB_DISABLE_TEST_IN_TSAN(ParallelUpdatesDetectDeadlock)) { constexpr int kNumKeys = 20; auto setup_conn = ASSERT_RESULT(Connect()); ASSERT_OK(setup_conn.Execute("CREATE TABLE foo (k INT PRIMARY KEY, v INT)")); for (int deadlock_idx = 1; deadlock_idx <= kNumKeys; ++deadlock_idx) { ASSERT_OK(setup_conn.Execute("TRUNCATE TABLE foo")); ASSERT_OK(setup_conn.ExecuteFormat( "INSERT INTO foo SELECT generate_series(0, $0), 0", kNumKeys * 5)); TestThreadHolder thread_holder; auto update_conn = ASSERT_RESULT(Connect()); ASSERT_OK(update_conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); ASSERT_OK(update_conn.Fetch("SELECT * FROM foo WHERE k=0 FOR UPDATE")); CountDownLatch locked_key(kNumKeys); CountDownLatch did_deadlock(1); for (int key_idx = 1; key_idx <= kNumKeys; ++key_idx) { thread_holder.AddThreadFunctor([this, &did_deadlock, &locked_key, key_idx, deadlock_idx] { auto conn = ASSERT_RESULT(Connect()); ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); ASSERT_OK(conn.FetchFormat("SELECT * FROM foo WHERE k=$0 FOR UPDATE", key_idx)); LOG(INFO) << "Thread " << key_idx << " locked key"; locked_key.CountDown(); ASSERT_TRUE(locked_key.WaitFor(10s * kTimeMultiplier)); if (deadlock_idx == key_idx) { std::this_thread::sleep_for(5s * kTimeMultiplier); LOG(INFO) << "Thread " << key_idx << " locking 0"; auto s = conn.Fetch("SELECT * FROM foo WHERE k=0 FOR UPDATE"); if (!s.ok()) { LOG(INFO) << "Thread " << key_idx << " failed to lock 0 " << s; did_deadlock.CountDown(); ASSERT_OK(conn.RollbackTransaction()); return; } else { LOG(INFO) << "Thread " << key_idx << " locked 0"; } } ASSERT_TRUE(did_deadlock.WaitFor(20s * kTimeMultiplier)); ASSERT_OK(conn.CommitTransaction()); LOG(INFO) << "Thread " << key_idx << " committed"; }); } ASSERT_TRUE(locked_key.WaitFor(5s * kTimeMultiplier)); LOG(INFO) << "About to update"; auto s = update_conn.ExecuteFormat("UPDATE foo SET v=20 WHERE k > 0 AND k <= $0", kNumKeys); if (s.ok()) { LOG(INFO) << "Successfully updated"; EXPECT_EQ(did_deadlock.count(), 0); ASSERT_OK(update_conn.CommitTransaction()); } else { LOG(INFO) << "Failed to update " << s; did_deadlock.CountDown(); } } } ``` [DB-5459]: https://yugabyte.atlassian.net/browse/DB-5459?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
priority
deadlock detector may overwrite wait for information if received from multiple tablets jira link description in case we have a transaction waiting at multiple tablets deadlock detection may fail to detect the deadlock since it currently stores only the latest update it received from any tablet we can hit such a case if the user issues an update which spans many rows the following test demonstrates how it might fail test f pgwaitqueuestest yb disable test in tsan parallelupdatesdetectdeadlock constexpr int knumkeys auto setup conn assert result connect assert ok setup conn execute create table foo k int primary key v int for int deadlock idx deadlock idx knumkeys deadlock idx assert ok setup conn execute truncate table foo assert ok setup conn executeformat insert into foo select generate series knumkeys testthreadholder thread holder auto update conn assert result connect assert ok update conn starttransaction isolationlevel snapshot isolation assert ok update conn fetch select from foo where k for update countdownlatch locked key knumkeys countdownlatch did deadlock for int key idx key idx knumkeys key idx thread holder addthreadfunctor auto conn assert result connect assert ok conn starttransaction isolationlevel snapshot isolation assert ok conn fetchformat select from foo where k for update key idx log info thread key idx locked key locked key countdown assert true locked key waitfor ktimemultiplier if deadlock idx key idx std this thread sleep for ktimemultiplier log info thread key idx locking auto s conn fetch select from foo where k for update if s ok log info thread key idx failed to lock s did deadlock countdown assert ok conn rollbacktransaction return else log info thread key idx locked assert true did deadlock waitfor ktimemultiplier assert ok conn committransaction log info thread key idx committed assert true locked key waitfor ktimemultiplier log info about to update auto s update conn executeformat update foo set v where k and k knumkeys if s ok log info successfully updated expect eq did deadlock count assert ok update conn committransaction else log info failed to update s did deadlock countdown
1
48,288
2,997,019,201
IssuesEvent
2015-07-23 02:53:27
theminted/lesswrong-migrated
https://api.github.com/repos/theminted/lesswrong-migrated
closed
add distinct viewable blogs for individual contributors
Contributions-Welcome imported Priority-Medium Type-Feature
_From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on January 28, 2009 17:42:32_ Gerrit: I think this means filtered views of the post list, which can be by author, but may as well be by a number of filter criteria, integrating the following point. _Original issue: http://code.google.com/p/lesswrong/issues/detail?id=6_
1.0
add distinct viewable blogs for individual contributors - _From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on January 28, 2009 17:42:32_ Gerrit: I think this means filtered views of the post list, which can be by author, but may as well be by a number of filter criteria, integrating the following point. _Original issue: http://code.google.com/p/lesswrong/issues/detail?id=6_
priority
add distinct viewable blogs for individual contributors from on january gerrit i think this means filtered views of the post list which can be by author but may as well be by a number of filter criteria integrating the following point original issue
1
531,083
15,440,441,024
IssuesEvent
2021-03-08 03:19:25
space-wizards/space-station-14
https://api.github.com/repos/space-wizards/space-station-14
opened
Add a disposable tag
Difficulty: 2 - Medium Priority: 4-wishlist Size: 2 - Small Type: Improvement
Just like recyclable or whatever it should be cleaner than just checking for storeable or IBody IMO.
1.0
Add a disposable tag - Just like recyclable or whatever it should be cleaner than just checking for storeable or IBody IMO.
priority
add a disposable tag just like recyclable or whatever it should be cleaner than just checking for storeable or ibody imo
1
428,200
12,404,581,686
IssuesEvent
2020-05-21 15:46:44
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
closed
Set up logging for hearing dispositions
Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃
## Description Record when dispositions were set on hearings and what they were ## Acceptance criteria - [ ] Write query - [ ] Set up graphs in Metabase ## Background/context/resources Long-term goal: increase hearing capacity Short-term goal: allow more hearings to be held during Covid-19 stay-at-home orders ## Technical notes
1.0
Set up logging for hearing dispositions - ## Description Record when dispositions were set on hearings and what they were ## Acceptance criteria - [ ] Write query - [ ] Set up graphs in Metabase ## Background/context/resources Long-term goal: increase hearing capacity Short-term goal: allow more hearings to be held during Covid-19 stay-at-home orders ## Technical notes
priority
set up logging for hearing dispositions description record when dispositions were set on hearings and what they were acceptance criteria write query set up graphs in metabase background context resources long term goal increase hearing capacity short term goal allow more hearings to be held during covid stay at home orders technical notes
1
671,493
22,763,450,768
IssuesEvent
2022-07-08 00:11:35
DinoDevs/GladiatusCrazyAddon
https://api.github.com/repos/DinoDevs/GladiatusCrazyAddon
opened
Show mercenary max stats on tooltips
:bulb: feature request 📈 Medium priority
I think this is helpful comparing mercenaries: ![image](https://user-images.githubusercontent.com/9620510/177891317-27321ee2-1113-4062-a2fb-4b55c4747465.png) Stats are the maximum values from mercenary's stats
1.0
Show mercenary max stats on tooltips - I think this is helpful comparing mercenaries: ![image](https://user-images.githubusercontent.com/9620510/177891317-27321ee2-1113-4062-a2fb-4b55c4747465.png) Stats are the maximum values from mercenary's stats
priority
show mercenary max stats on tooltips i think this is helpful comparing mercenaries stats are the maximum values from mercenary s stats
1
717,135
24,663,303,444
IssuesEvent
2022-10-18 08:26:26
geosolutions-it/MapStore2
https://api.github.com/repos/geosolutions-it/MapStore2
opened
Point and Line layers make it difficult to use Identify
bug Priority: Medium 3D C040-COMUNE_GE-2022-CUSTOM-SUPPORT
## Description <!-- Add here a few sentences describing the bug. --> In 3D mode it is quite difficult to obtain Identify response when clicking on a Line or Point WFS layers. The current logic make it really difficult to understand where to click to obtain Identify information probably due to the camera orientation in 3D. ## How to reproduce <!-- A list of steps to reproduce the bug --> - Try with [this map](https://mappe.comune.genova.it/MapStore2/#/viewer/openlayers/1000009465) layers - FONTI ACQUA POTABILE ACQUEDOTTO - LINEA ACQUEDOTTO STORICO *Expected Result* <!-- Describe here the expected result --> It should be possible to easily click on the layer to obtain Identify info (a kind of tolerance around the clicked point?) *Current Result* <!-- Describe here the current behavior --> You click on the layer and the most part of the time you don't receive info especially for point WFS Layers ![image](https://user-images.githubusercontent.com/1280027/191727819-98b801f9-84dd-4ec7-a5b4-390011cfcd7b.png) - [x] Not browser related <details><summary> <b>Browser info</b> </summary> <!-- If browser related, please compile the following table --> <!-- If your browser is not in the list please add a new row to the table with the version --> (use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users) | Browser Affected | Version | |---|---| |Internet Explorer| | |Edge| | |Chrome| | |Firefox| | |Safari| | </details> ## Other useful information <!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
1.0
Point and Line layers make it difficult to use Identify - ## Description <!-- Add here a few sentences describing the bug. --> In 3D mode it is quite difficult to obtain Identify response when clicking on a Line or Point WFS layers. The current logic make it really difficult to understand where to click to obtain Identify information probably due to the camera orientation in 3D. ## How to reproduce <!-- A list of steps to reproduce the bug --> - Try with [this map](https://mappe.comune.genova.it/MapStore2/#/viewer/openlayers/1000009465) layers - FONTI ACQUA POTABILE ACQUEDOTTO - LINEA ACQUEDOTTO STORICO *Expected Result* <!-- Describe here the expected result --> It should be possible to easily click on the layer to obtain Identify info (a kind of tolerance around the clicked point?) *Current Result* <!-- Describe here the current behavior --> You click on the layer and the most part of the time you don't receive info especially for point WFS Layers ![image](https://user-images.githubusercontent.com/1280027/191727819-98b801f9-84dd-4ec7-a5b4-390011cfcd7b.png) - [x] Not browser related <details><summary> <b>Browser info</b> </summary> <!-- If browser related, please compile the following table --> <!-- If your browser is not in the list please add a new row to the table with the version --> (use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users) | Browser Affected | Version | |---|---| |Internet Explorer| | |Edge| | |Chrome| | |Firefox| | |Safari| | </details> ## Other useful information <!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
priority
point and line layers make it difficult to use identify description in mode it is quite difficult to obtain identify response when clicking on a line or point wfs layers the current logic make it really difficult to understand where to click to obtain identify information probably due to the camera orientation in how to reproduce try with layers fonti acqua potabile acquedotto linea acquedotto storico expected result it should be possible to easily click on the layer to obtain identify info a kind of tolerance around the clicked point current result you click on the layer and the most part of the time you don t receive info especially for point wfs layers not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information
1
409,610
11,965,386,945
IssuesEvent
2020-04-05 23:10:55
poissonconsulting/shinyssdtools
https://api.github.com/repos/poissonconsulting/shinyssdtools
closed
Update and redeploy for ssdtools 0.1.1.9003
Difficulty: 2 Intermediate Effort: 2 Medium Priority: 2 High Type: Refactor
Relevant changes ``` - Change computable (standard errors) to FALSE by default. - Only give warning about standard errors if computable = TRUE. - Replaced 'burrIII2' for 'llogis' in default set. - Deprecated 'burrIII2' for 'llogis'. - Provide warning message about change in default for ci argument in predict function. - Enforce only 'llogis', 'llog' or 'burrIII2' (because all the same) ```
1.0
Update and redeploy for ssdtools 0.1.1.9003 - Relevant changes ``` - Change computable (standard errors) to FALSE by default. - Only give warning about standard errors if computable = TRUE. - Replaced 'burrIII2' for 'llogis' in default set. - Deprecated 'burrIII2' for 'llogis'. - Provide warning message about change in default for ci argument in predict function. - Enforce only 'llogis', 'llog' or 'burrIII2' (because all the same) ```
priority
update and redeploy for ssdtools relevant changes change computable standard errors to false by default only give warning about standard errors if computable true replaced for llogis in default set deprecated for llogis provide warning message about change in default for ci argument in predict function enforce only llogis llog or because all the same
1
157,722
6,011,555,383
IssuesEvent
2017-06-06 15:25:53
infolab-csail/WikipediaBase
https://api.github.com/repos/infolab-csail/WikipediaBase
opened
IMAGE-LIVE --> "local variable 'img_url' referenced before assignment"
bug priority/medium
Most of the time `IMAGE-LIVE` works fine, but not in this case: ``` $ telnet <host> 8023 (get "wikibase-term" "William \"Tiger\" Dunlop" (:CODE "IMAGE-LIVE") (:FILE "canmedaj01489-0107-a.gif" :MAX-WIDTH 400)) ((:error UnboundLocalError :message "local variable 'img_url' referenced before assignment")) ``` When I examine the [article in Wikipedia](https://en.wikipedia.org/wiki/William_%22Tiger%22_Dunlop), I can see a reference to `canmedaj01489-0107-a.gif` in the source, and the image gets displayed in the rendered page. So it looks like our fault and not mere bad data. I tried this in WikipediaBase servers on several hosts, including the one that's running a freshly regenerated backend. Fix the bug. Or if there is a legitimate problem with the image, then make the error message more informative.
1.0
IMAGE-LIVE --> "local variable 'img_url' referenced before assignment" - Most of the time `IMAGE-LIVE` works fine, but not in this case: ``` $ telnet <host> 8023 (get "wikibase-term" "William \"Tiger\" Dunlop" (:CODE "IMAGE-LIVE") (:FILE "canmedaj01489-0107-a.gif" :MAX-WIDTH 400)) ((:error UnboundLocalError :message "local variable 'img_url' referenced before assignment")) ``` When I examine the [article in Wikipedia](https://en.wikipedia.org/wiki/William_%22Tiger%22_Dunlop), I can see a reference to `canmedaj01489-0107-a.gif` in the source, and the image gets displayed in the rendered page. So it looks like our fault and not mere bad data. I tried this in WikipediaBase servers on several hosts, including the one that's running a freshly regenerated backend. Fix the bug. Or if there is a legitimate problem with the image, then make the error message more informative.
priority
image live local variable img url referenced before assignment most of the time image live works fine but not in this case telnet get wikibase term william tiger dunlop code image live file a gif max width error unboundlocalerror message local variable img url referenced before assignment when i examine the i can see a reference to a gif in the source and the image gets displayed in the rendered page so it looks like our fault and not mere bad data i tried this in wikipediabase servers on several hosts including the one that s running a freshly regenerated backend fix the bug or if there is a legitimate problem with the image then make the error message more informative
1
205,515
7,102,778,464
IssuesEvent
2018-01-16 00:23:37
davide-romanini/comictagger
https://api.github.com/repos/davide-romanini/comictagger
closed
Will Not Save Tags
Priority-Medium bug imported
_From [kylekoco...@gmail.com](https://code.google.com/u/101735958907267326152/) on March 09, 2014 18:58:09_ What steps will reproduce the problem? 1. Searching for tags 2. Auto-tag 3. Auto search What is the expected output? What do you see instead? When I click save tags, it should save the tags. It worked for about a dozen comics. Now, it keeps giving me an error saying Failed to save. All file names are the same and no matter what file I try, it does not work. What version of the product are you using? On what operating system? I have tried the most recent version as well as the previous version and it does not work. Please provide any additional information below. _Original issue: http://code.google.com/p/comictagger/issues/detail?id=44_
1.0
Will Not Save Tags - _From [kylekoco...@gmail.com](https://code.google.com/u/101735958907267326152/) on March 09, 2014 18:58:09_ What steps will reproduce the problem? 1. Searching for tags 2. Auto-tag 3. Auto search What is the expected output? What do you see instead? When I click save tags, it should save the tags. It worked for about a dozen comics. Now, it keeps giving me an error saying Failed to save. All file names are the same and no matter what file I try, it does not work. What version of the product are you using? On what operating system? I have tried the most recent version as well as the previous version and it does not work. Please provide any additional information below. _Original issue: http://code.google.com/p/comictagger/issues/detail?id=44_
priority
will not save tags from on march what steps will reproduce the problem searching for tags auto tag auto search what is the expected output what do you see instead when i click save tags it should save the tags it worked for about a dozen comics now it keeps giving me an error saying failed to save all file names are the same and no matter what file i try it does not work what version of the product are you using on what operating system i have tried the most recent version as well as the previous version and it does not work please provide any additional information below original issue
1
270,105
8,452,417,716
IssuesEvent
2018-10-20 03:30:38
minio/minio
https://api.github.com/repos/minio/minio
closed
Unable to heal deleted files in distributed mode
priority: medium
## Expected Behavior In distributed mode, when a node is taken out of the cluster for maintenance or due to outage, the node can be returned to the cluster and healed using the Heal API. After healing, cluster should be restored to 100% Green state. ## Current Behavior If a file has been deleted from Minio while a node was offline, the heal result will report Grey for that file. Heal status can never be restored to Green. ## Possible Solution Heal should be able to sync the deleted state. In addition, some documentation of the Heal API and expected usage would be helpful. ## Steps to Reproduce (for bugs) 1. Start up minio in distributed mode. I have 6 modes, but 4 should be enough. 2. Create a bucket with several files. 3. Shut down one node. 4. Delete one or more files. 5. Bring the node back online. 6. Run a Heal command. The files deleted will appear Grey. ## Context We are testing the new Heal API and need to ensure our systems will be stable in normal usage. In production it is expected that nodes will be taken offline for maintenance, or may be down for some reason. The cluster must survive this. We need to be able to recover from this scenario and return to a healthy state, without the need to restore from backup. ## Your Environment * Environment name and version (e.g. nginx 1.9.1): MINIO_VERSION=RELEASE.2018-05-16T23-35-33Z * Server type and version: Cloud Virtual Servers * Operating System and version (uname -a):Linux 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2 16:08:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1.0
Unable to heal deleted files in distributed mode - ## Expected Behavior In distributed mode, when a node is taken out of the cluster for maintenance or due to outage, the node can be returned to the cluster and healed using the Heal API. After healing, cluster should be restored to 100% Green state. ## Current Behavior If a file has been deleted from Minio while a node was offline, the heal result will report Grey for that file. Heal status can never be restored to Green. ## Possible Solution Heal should be able to sync the deleted state. In addition, some documentation of the Heal API and expected usage would be helpful. ## Steps to Reproduce (for bugs) 1. Start up minio in distributed mode. I have 6 modes, but 4 should be enough. 2. Create a bucket with several files. 3. Shut down one node. 4. Delete one or more files. 5. Bring the node back online. 6. Run a Heal command. The files deleted will appear Grey. ## Context We are testing the new Heal API and need to ensure our systems will be stable in normal usage. In production it is expected that nodes will be taken offline for maintenance, or may be down for some reason. The cluster must survive this. We need to be able to recover from this scenario and return to a healthy state, without the need to restore from backup. ## Your Environment * Environment name and version (e.g. nginx 1.9.1): MINIO_VERSION=RELEASE.2018-05-16T23-35-33Z * Server type and version: Cloud Virtual Servers * Operating System and version (uname -a):Linux 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2 16:08:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
priority
unable to heal deleted files in distributed mode expected behavior in distributed mode when a node is taken out of the cluster for maintenance or due to outage the node can be returned to the cluster and healed using the heal api after healing cluster should be restored to green state current behavior if a file has been deleted from minio while a node was offline the heal result will report grey for that file heal status can never be restored to green possible solution heal should be able to sync the deleted state in addition some documentation of the heal api and expected usage would be helpful steps to reproduce for bugs start up minio in distributed mode i have modes but should be enough create a bucket with several files shut down one node delete one or more files bring the node back online run a heal command the files deleted will appear grey context we are testing the new heal api and need to ensure our systems will be stable in normal usage in production it is expected that nodes will be taken offline for maintenance or may be down for some reason the cluster must survive this we need to be able to recover from this scenario and return to a healthy state without the need to restore from backup your environment environment name and version e g nginx minio version release server type and version cloud virtual servers operating system and version uname a linux generic ubuntu smp mon apr utc gnu linux
1
65,163
3,226,915,075
IssuesEvent
2015-10-10 18:25:23
Sonarr/Sonarr
https://api.github.com/repos/Sonarr/Sonarr
closed
Blackhole doesn't treat previous grab as meeting cutoff
priority:medium suboptimal
Normally Sonarr relies on the queue to determine if the cutoff has been met by an item in the queue, but with blackhole it doesn't have a queue to track, in this scenario Sonarr should use its history to see if the history if the cutoff has been met, until it knows otherwise (failed or completed).
1.0
Blackhole doesn't treat previous grab as meeting cutoff - Normally Sonarr relies on the queue to determine if the cutoff has been met by an item in the queue, but with blackhole it doesn't have a queue to track, in this scenario Sonarr should use its history to see if the history if the cutoff has been met, until it knows otherwise (failed or completed).
priority
blackhole doesn t treat previous grab as meeting cutoff normally sonarr relies on the queue to determine if the cutoff has been met by an item in the queue but with blackhole it doesn t have a queue to track in this scenario sonarr should use its history to see if the history if the cutoff has been met until it knows otherwise failed or completed
1
122,651
4,838,579,540
IssuesEvent
2016-11-09 04:24:49
smirkspace/smirkspace
https://api.github.com/repos/smirkspace/smirkspace
opened
Add Blurbs
backend frontend medium priority
Add blurbs for each space. Before entering a space a user will enter a blurb of what they want to talk about.
1.0
Add Blurbs - Add blurbs for each space. Before entering a space a user will enter a blurb of what they want to talk about.
priority
add blurbs add blurbs for each space before entering a space a user will enter a blurb of what they want to talk about
1
76,156
3,482,131,601
IssuesEvent
2015-12-29 21:00:10
M-Zuber/MyHome
https://api.github.com/repos/M-Zuber/MyHome
closed
The dropdown in DataChartUI shows a category for both transaction types - even though it is only in one
bug Medium Priority
I am not sure what layer the code that causes the bug is on... [EDIT] See below as the issue is caused when entering a new transaction
1.0
The dropdown in DataChartUI shows a category for both transaction types - even though it is only in one - I am not sure what layer the code that causes the bug is on... [EDIT] See below as the issue is caused when entering a new transaction
priority
the dropdown in datachartui shows a category for both transaction types even though it is only in one i am not sure what layer the code that causes the bug is on see below as the issue is caused when entering a new transaction
1
365,255
10,780,098,860
IssuesEvent
2019-11-04 12:11:07
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
When METRICS_DISTRIBUTED_DATASTRUCTURES is enabled, Cache statistics is not available on the metrics
Module: Diagnostics Module: ICache Module: Metrics Priority: Medium Source: Internal Source: Jet Team: Core
The reason is that the Cache service doesn't implement the `StatisticsAwareService` interface. P.S. While implementing this, there is an opportunity to remove the cache specific statistics collection code from `com.hazelcast.internal.management.TimedMemberStateFactory#createMemState`
1.0
When METRICS_DISTRIBUTED_DATASTRUCTURES is enabled, Cache statistics is not available on the metrics - The reason is that the Cache service doesn't implement the `StatisticsAwareService` interface. P.S. While implementing this, there is an opportunity to remove the cache specific statistics collection code from `com.hazelcast.internal.management.TimedMemberStateFactory#createMemState`
priority
when metrics distributed datastructures is enabled cache statistics is not available on the metrics the reason is that the cache service doesn t implement the statisticsawareservice interface p s while implementing this there is an opportunity to remove the cache specific statistics collection code from com hazelcast internal management timedmemberstatefactory creatememstate
1
523,154
15,173,727,783
IssuesEvent
2021-02-13 15:25:15
glotaran/pyglotaran
https://api.github.com/repos/glotaran/pyglotaran
closed
Store the parameters of each optimization step
Priority: Medium Status: In Progress Type: Enhancement
An optimization call will optimize a set of parameters given a model in a number of steps or iterations. The information of the trajectory that each parameter following in optimization space is very valuable information for modelling. Which parameters were optimized the most, by how much. Which parameters were largely stagnant (candidates for fixing / new starting values) Which parameters were all over the place (candidates for fixing / elimination) In addition to the parameter values the cost, cost reduction, step norm and optimality would also be required.
1.0
Store the parameters of each optimization step - An optimization call will optimize a set of parameters given a model in a number of steps or iterations. The information of the trajectory that each parameter following in optimization space is very valuable information for modelling. Which parameters were optimized the most, by how much. Which parameters were largely stagnant (candidates for fixing / new starting values) Which parameters were all over the place (candidates for fixing / elimination) In addition to the parameter values the cost, cost reduction, step norm and optimality would also be required.
priority
store the parameters of each optimization step an optimization call will optimize a set of parameters given a model in a number of steps or iterations the information of the trajectory that each parameter following in optimization space is very valuable information for modelling which parameters were optimized the most by how much which parameters were largely stagnant candidates for fixing new starting values which parameters were all over the place candidates for fixing elimination in addition to the parameter values the cost cost reduction step norm and optimality would also be required
1
43,551
2,889,846,970
IssuesEvent
2015-06-13 20:26:05
damonkohler/sl4a
https://api.github.com/repos/damonkohler/sl4a
opened
Add scipy and numpy modules to standard modules
auto-migrated Priority-Medium Type-Enhancement
_From @GoogleCodeExporter on May 31, 2015 11:25_ ``` Is it possible to add numpy and scipy to the list of modules bundled with ASE. They are not pure python modules and so cannot be added on the sd card. ``` Original issue reported on code.google.com by `mszar...@gmail.com` on 23 Mar 2010 at 11:33 _Copied from original issue: damonkohler/android-scripting#260_
1.0
Add scipy and numpy modules to standard modules - _From @GoogleCodeExporter on May 31, 2015 11:25_ ``` Is it possible to add numpy and scipy to the list of modules bundled with ASE. They are not pure python modules and so cannot be added on the sd card. ``` Original issue reported on code.google.com by `mszar...@gmail.com` on 23 Mar 2010 at 11:33 _Copied from original issue: damonkohler/android-scripting#260_
priority
add scipy and numpy modules to standard modules from googlecodeexporter on may is it possible to add numpy and scipy to the list of modules bundled with ase they are not pure python modules and so cannot be added on the sd card original issue reported on code google com by mszar gmail com on mar at copied from original issue damonkohler android scripting
1
297,565
9,172,193,831
IssuesEvent
2019-03-04 05:59:35
OpenPrinting/openprinting.github.io
https://api.github.com/repos/OpenPrinting/openprinting.github.io
closed
Add content to the 'Downloads' page
content migration difficulty/low help wanted priority/medium
Add content to the `Downloads` page located here: https://openprinting.github.io/downloads/ Depends on: #36
1.0
Add content to the 'Downloads' page - Add content to the `Downloads` page located here: https://openprinting.github.io/downloads/ Depends on: #36
priority
add content to the downloads page add content to the downloads page located here depends on
1
94,241
3,923,495,856
IssuesEvent
2016-04-22 11:36:14
haiwen/seafile
https://api.github.com/repos/haiwen/seafile
reopened
Move file, then upload with same name -> file already exists
priority-medium
Server Version: 5.0.2 If you move a file through the web interface to a subfolder and then upload a new file with exactly the same name to the old location, the message pops up that the file already exists. How to reproduce: 1. Create a file called foo.txt in your library root `/foo.txt` 2. Open the web interface and move `foo.txt` to the subfolder `/bar`. Results in `/bar/foo.txt` 3. Without reloading the page upload a file with the same name `foo.txt` through the webinterface into '/foo.txt' 4. You will get the popup that the file already exists and if you want to replace it. The popup should not be shown since the file was already moved out of the way. This problem doesn't occur if you use rename.
1.0
Move file, then upload with same name -> file already exists - Server Version: 5.0.2 If you move a file through the web interface to a subfolder and then upload a new file with exactly the same name to the old location, the message pops up that the file already exists. How to reproduce: 1. Create a file called foo.txt in your library root `/foo.txt` 2. Open the web interface and move `foo.txt` to the subfolder `/bar`. Results in `/bar/foo.txt` 3. Without reloading the page upload a file with the same name `foo.txt` through the webinterface into '/foo.txt' 4. You will get the popup that the file already exists and if you want to replace it. The popup should not be shown since the file was already moved out of the way. This problem doesn't occur if you use rename.
priority
move file then upload with same name file already exists server version if you move a file through the web interface to a subfolder and then upload a new file with exactly the same name to the old location the message pops up that the file already exists how to reproduce create a file called foo txt in your library root foo txt open the web interface and move foo txt to the subfolder bar results in bar foo txt without reloading the page upload a file with the same name foo txt through the webinterface into foo txt you will get the popup that the file already exists and if you want to replace it the popup should not be shown since the file was already moved out of the way this problem doesn t occur if you use rename
1
39,335
2,853,636,325
IssuesEvent
2015-06-01 19:36:52
Sistema-Integrado-Gestao-Academica/SiGA
https://api.github.com/repos/Sistema-Integrado-Gestao-Academica/SiGA
closed
Avaliação: Estrutura Básica
[Medium Priority]
Como **Administrador** desejo visualizar a avaliação dos programas do sistema, para que possa acompanhar em tempo real a condição de cada programa frente a avaliação da **CAPES**. -------------- * Construir a estrutura básica para abrigar a **Avaliação** para os N programas dentro do software. * Cada programa possui uma avaliação, que deve baseada num tempo. (4 anos), ou seja de 4 em 4 anos o programa recebe uma nota. * Deve ser possível alterar a periodicidade da Avaliação (de 4 anos para 3 anos por exemplo, ou 5 anos). * Atenção para próxima issue na sequência: A avaliação possui "parciais" anuais. Atenção para não limitar a classe.
1.0
Avaliação: Estrutura Básica - Como **Administrador** desejo visualizar a avaliação dos programas do sistema, para que possa acompanhar em tempo real a condição de cada programa frente a avaliação da **CAPES**. -------------- * Construir a estrutura básica para abrigar a **Avaliação** para os N programas dentro do software. * Cada programa possui uma avaliação, que deve baseada num tempo. (4 anos), ou seja de 4 em 4 anos o programa recebe uma nota. * Deve ser possível alterar a periodicidade da Avaliação (de 4 anos para 3 anos por exemplo, ou 5 anos). * Atenção para próxima issue na sequência: A avaliação possui "parciais" anuais. Atenção para não limitar a classe.
priority
avaliação estrutura básica como administrador desejo visualizar a avaliação dos programas do sistema para que possa acompanhar em tempo real a condição de cada programa frente a avaliação da capes construir a estrutura básica para abrigar a avaliação para os n programas dentro do software cada programa possui uma avaliação que deve baseada num tempo anos ou seja de em anos o programa recebe uma nota deve ser possível alterar a periodicidade da avaliação de anos para anos por exemplo ou anos atenção para próxima issue na sequência a avaliação possui parciais anuais atenção para não limitar a classe
1
75,371
3,461,844,657
IssuesEvent
2015-12-20 12:45:27
PowerPointLabs/powerpointlabs
https://api.github.com/repos/PowerPointLabs/powerpointlabs
closed
Make drop-zone remember its position
Feature.ImagesLab Priority.Medium status.releaseCandidate type-enhancement
e.g. if I remove it my other window where I do the image search, I would like it to remain there (or appear at the same location) until I move it again.
1.0
Make drop-zone remember its position - e.g. if I remove it my other window where I do the image search, I would like it to remain there (or appear at the same location) until I move it again.
priority
make drop zone remember its position e g if i remove it my other window where i do the image search i would like it to remain there or appear at the same location until i move it again
1
427,079
12,392,667,118
IssuesEvent
2020-05-20 14:20:04
OaklandDevTeam/umbrella
https://api.github.com/repos/OaklandDevTeam/umbrella
closed
Endpoint to login/register Users
Priority: Medium back-end
Users should be able to register an account to use the application. Users should also be able to login to their respective accounts to use the application. A REST endpoint must be created to register an account with the auth layer. A REST endpoint must be created to login an account with the auth layer.
1.0
Endpoint to login/register Users - Users should be able to register an account to use the application. Users should also be able to login to their respective accounts to use the application. A REST endpoint must be created to register an account with the auth layer. A REST endpoint must be created to login an account with the auth layer.
priority
endpoint to login register users users should be able to register an account to use the application users should also be able to login to their respective accounts to use the application a rest endpoint must be created to register an account with the auth layer a rest endpoint must be created to login an account with the auth layer
1
651,770
21,509,833,762
IssuesEvent
2022-04-28 02:22:47
pystardust/ani-cli
https://api.github.com/repos/pystardust/ani-cli
opened
Updates that occur during a loss of internet connectivity can cause the ani-cli script to be corrupted on more likely be void.
type: bug priority 2: medium
**Metadata (please complete the following information)** Version: 2.1.5 OS: Arch lts kernel Shell: fish Anime: N/A **Describe the bug** Updates that occur during a loss of internet connectivity can cause the ani-cli script to be corrupted on more likely be void. **Steps To Reproduce** 1. Run `ani-cli -U` during a internet outage **Expected behavior** Update failure is reported **Additional context** N/A
1.0
Updates that occur during a loss of internet connectivity can cause the ani-cli script to be corrupted on more likely be void. - **Metadata (please complete the following information)** Version: 2.1.5 OS: Arch lts kernel Shell: fish Anime: N/A **Describe the bug** Updates that occur during a loss of internet connectivity can cause the ani-cli script to be corrupted on more likely be void. **Steps To Reproduce** 1. Run `ani-cli -U` during a internet outage **Expected behavior** Update failure is reported **Additional context** N/A
priority
updates that occur during a loss of internet connectivity can cause the ani cli script to be corrupted on more likely be void metadata please complete the following information version os arch lts kernel shell fish anime n a describe the bug updates that occur during a loss of internet connectivity can cause the ani cli script to be corrupted on more likely be void steps to reproduce run ani cli u during a internet outage expected behavior update failure is reported additional context n a
1
819,044
30,717,790,495
IssuesEvent
2023-07-27 14:04:48
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[xCluster] Simplify the xcluster test helper functions
kind/enhancement area/docdb priority/medium
Jira Link: [DB-7394](https://yugabyte.atlassian.net/browse/DB-7394) ### Description SetupUniverseReplication returns a list of tables in a weirdly sorted order. Each xcluster test has to parse and extract out producer and consumer tables from this list. Instead we can directly store the two lists in the base class. ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information. [DB-7394]: https://yugabyte.atlassian.net/browse/DB-7394?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
1.0
[xCluster] Simplify the xcluster test helper functions - Jira Link: [DB-7394](https://yugabyte.atlassian.net/browse/DB-7394) ### Description SetupUniverseReplication returns a list of tables in a weirdly sorted order. Each xcluster test has to parse and extract out producer and consumer tables from this list. Instead we can directly store the two lists in the base class. ### Warning: Please confirm that this issue does not contain any sensitive information - [X] I confirm this issue does not contain any sensitive information. [DB-7394]: https://yugabyte.atlassian.net/browse/DB-7394?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
priority
simplify the xcluster test helper functions jira link description setupuniversereplication returns a list of tables in a weirdly sorted order each xcluster test has to parse and extract out producer and consumer tables from this list instead we can directly store the two lists in the base class warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information
1
655,156
21,678,924,161
IssuesEvent
2022-05-09 02:55:50
TeamTwilight/twilightforest-fabric
https://api.github.com/repos/TeamTwilight/twilightforest-fabric
closed
no fog in spooky forest
bug Priority: Medium missing feature
### Forge Version 0.11.7 ### Twilight Forest Version 4.0.69 ### Client Log _No response_ ### Crash Report (if applicable) _No response_ ### Steps to Reproduce there is no pink or magenta fog in spooky forest ### What You Expected there to be pink or magenta fog in the spooky forest like normal. ### What Happened Instead no pink or magenta fog. ### Additional Details _No response_ ### Please Read and Confirm The Following - [X] I have confirmed this bug can be replicated without the use of Optifine. - [X] I have confirmed the details provided in this report are concise as possible and does not contained vague information (ie. Versions are properly recorded, answers to questions are clear).
1.0
no fog in spooky forest - ### Forge Version 0.11.7 ### Twilight Forest Version 4.0.69 ### Client Log _No response_ ### Crash Report (if applicable) _No response_ ### Steps to Reproduce there is no pink or magenta fog in spooky forest ### What You Expected there to be pink or magenta fog in the spooky forest like normal. ### What Happened Instead no pink or magenta fog. ### Additional Details _No response_ ### Please Read and Confirm The Following - [X] I have confirmed this bug can be replicated without the use of Optifine. - [X] I have confirmed the details provided in this report are concise as possible and does not contained vague information (ie. Versions are properly recorded, answers to questions are clear).
priority
no fog in spooky forest forge version twilight forest version client log no response crash report if applicable no response steps to reproduce there is no pink or magenta fog in spooky forest what you expected there to be pink or magenta fog in the spooky forest like normal what happened instead no pink or magenta fog additional details no response please read and confirm the following i have confirmed this bug can be replicated without the use of optifine i have confirmed the details provided in this report are concise as possible and does not contained vague information ie versions are properly recorded answers to questions are clear
1
247,112
7,896,222,810
IssuesEvent
2018-06-29 07:48:07
Creepsky/creepMiner
https://api.github.com/repos/Creepsky/creepMiner
closed
The pool calculated a different deadline + during validate the plotfile the Miner chrashes with segmentation fault (1.8.2) + ignoring memory allocation
Priority: Medium Status: To check
### Subject of the issue Mining with POC2 plots causes this message: "The pool calculated a different deadline for your nonce than your miner has!" I tried to check the plotfile with CreepMiner. Everytime i click the validate button in the webserver the CreepMiner crashes with the message Segmentation fault. I got this error some time ago on storing not plot files on the HDD but this time there is only one plotfile per HDD and nothing else. During using forwarding feature with several PC the miner ignores the memory settings in the config and fully fills the RAM and the SWAP file and then it crashes. ### Your environment * version of creepMiner: 1.8.2 (on Solo Mining) * version of OS: Mintlinux 18.3 * Wallet Version: BRS 2.2.1, Maria Db * which browser and its version. Firefox 59.0.2 ### Steps to reproduce Every restart of the Miner causes exactly the same behaviour on my machine ### Expected behavior Finding and submitting nonces without different deadline error ### Actual behavior The miner gives a mesage that the pool calculated a different deadline for my nonce than my miner has. On Validating the POC2 plot file with the Miner it crashes with error Segmentation fault error message. ### Other information Some days ago the miner flushes my memory until it was all filled with creepminer data. I was setting 34 GB but CreepMiner got more than 60 GB. At this time I had a lot more HDD and POC1 files in there. Actually only testing with POC2 files and a small amount of HDD (6 HDD each 10 TB, POC2 per Plotfile 9,8 TB). In my actual log file i can see "Allocated memory: 0.0 MB". So i think there could be a memory allocation issue that causes this issue and perhaps that could be the problem for the segmentation fault issue too. But i am no developer. Perhaps there could be a bad POC2 file cause this issue too, i think. I used the "cg_obup-PoC2" Plotter for creating my plotfiles. But all my POC2 files are affected with the "deadline error" Cause i am on solo mining i think the calculated deadline from my pool is form my own Wallet BRS 2.2.1. So perhaps the wallet sends a wrong deadline message to the miner and perhaps the miner is working fine but the wallet is only sending wrong dealines ? I will try to set the a reward assignmnet again with the new BRS Wallet. I don't know. In the end you can find my mining.config setting and log files (i deleted personal information in these files and set ... for these information) [mining2.config.txt](https://github.com/Creepsky/creepMiner/files/2135994/mining2.config.txt) [creepMiner_20180626_063601.194979.txt](https://github.com/Creepsky/creepMiner/files/2135997/creepMiner_20180626_063601.194979.txt)
1.0
The pool calculated a different deadline + during validate the plotfile the Miner chrashes with segmentation fault (1.8.2) + ignoring memory allocation - ### Subject of the issue Mining with POC2 plots causes this message: "The pool calculated a different deadline for your nonce than your miner has!" I tried to check the plotfile with CreepMiner. Everytime i click the validate button in the webserver the CreepMiner crashes with the message Segmentation fault. I got this error some time ago on storing not plot files on the HDD but this time there is only one plotfile per HDD and nothing else. During using forwarding feature with several PC the miner ignores the memory settings in the config and fully fills the RAM and the SWAP file and then it crashes. ### Your environment * version of creepMiner: 1.8.2 (on Solo Mining) * version of OS: Mintlinux 18.3 * Wallet Version: BRS 2.2.1, Maria Db * which browser and its version. Firefox 59.0.2 ### Steps to reproduce Every restart of the Miner causes exactly the same behaviour on my machine ### Expected behavior Finding and submitting nonces without different deadline error ### Actual behavior The miner gives a mesage that the pool calculated a different deadline for my nonce than my miner has. On Validating the POC2 plot file with the Miner it crashes with error Segmentation fault error message. ### Other information Some days ago the miner flushes my memory until it was all filled with creepminer data. I was setting 34 GB but CreepMiner got more than 60 GB. At this time I had a lot more HDD and POC1 files in there. Actually only testing with POC2 files and a small amount of HDD (6 HDD each 10 TB, POC2 per Plotfile 9,8 TB). In my actual log file i can see "Allocated memory: 0.0 MB". So i think there could be a memory allocation issue that causes this issue and perhaps that could be the problem for the segmentation fault issue too. But i am no developer. Perhaps there could be a bad POC2 file cause this issue too, i think. I used the "cg_obup-PoC2" Plotter for creating my plotfiles. But all my POC2 files are affected with the "deadline error" Cause i am on solo mining i think the calculated deadline from my pool is form my own Wallet BRS 2.2.1. So perhaps the wallet sends a wrong deadline message to the miner and perhaps the miner is working fine but the wallet is only sending wrong dealines ? I will try to set the a reward assignmnet again with the new BRS Wallet. I don't know. In the end you can find my mining.config setting and log files (i deleted personal information in these files and set ... for these information) [mining2.config.txt](https://github.com/Creepsky/creepMiner/files/2135994/mining2.config.txt) [creepMiner_20180626_063601.194979.txt](https://github.com/Creepsky/creepMiner/files/2135997/creepMiner_20180626_063601.194979.txt)
priority
the pool calculated a different deadline during validate the plotfile the miner chrashes with segmentation fault ignoring memory allocation subject of the issue mining with plots causes this message the pool calculated a different deadline for your nonce than your miner has i tried to check the plotfile with creepminer everytime i click the validate button in the webserver the creepminer crashes with the message segmentation fault i got this error some time ago on storing not plot files on the hdd but this time there is only one plotfile per hdd and nothing else during using forwarding feature with several pc the miner ignores the memory settings in the config and fully fills the ram and the swap file and then it crashes your environment version of creepminer on solo mining version of os mintlinux wallet version brs maria db which browser and its version firefox steps to reproduce every restart of the miner causes exactly the same behaviour on my machine expected behavior finding and submitting nonces without different deadline error actual behavior the miner gives a mesage that the pool calculated a different deadline for my nonce than my miner has on validating the plot file with the miner it crashes with error segmentation fault error message other information some days ago the miner flushes my memory until it was all filled with creepminer data i was setting gb but creepminer got more than gb at this time i had a lot more hdd and files in there actually only testing with files and a small amount of hdd hdd each tb per plotfile tb in my actual log file i can see allocated memory mb so i think there could be a memory allocation issue that causes this issue and perhaps that could be the problem for the segmentation fault issue too but i am no developer perhaps there could be a bad file cause this issue too i think i used the cg obup plotter for creating my plotfiles but all my files are affected with the deadline error cause i am on solo mining i think the calculated deadline from my pool is form my own wallet brs so perhaps the wallet sends a wrong deadline message to the miner and perhaps the miner is working fine but the wallet is only sending wrong dealines i will try to set the a reward assignmnet again with the new brs wallet i don t know in the end you can find my mining config setting and log files i deleted personal information in these files and set for these information
1
13,782
2,610,300,342
IssuesEvent
2015-02-26 19:36:31
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Idea: Hammer knocks hogs out.
auto-migrated Priority-Medium Type-Enhancement
``` The idea is that additional to the current effect of the hammer weapon hitting a hedgehog it would immediately paralyze the hedgehog until the start of the second next turn of its team. This effect can be reversed by any further damage (e.g. falling damage when being hit through a girder) as well as being hit by a hammer again (so you can't paralyze a hog through 2 consequent turns. (also drowning, for respawn reasons). * If the hog was the only team member alive the whole turn would be skipped. * Otherwise the next available hog in the team would be selected if it was the knocked out hedgehog's turn. * The hog would be unable to be selected with the hog switch tool. ``` ----- Original issue reported on code.google.com by `sheepyluva` on 4 Apr 2011 at 12:11
1.0
Idea: Hammer knocks hogs out. - ``` The idea is that additional to the current effect of the hammer weapon hitting a hedgehog it would immediately paralyze the hedgehog until the start of the second next turn of its team. This effect can be reversed by any further damage (e.g. falling damage when being hit through a girder) as well as being hit by a hammer again (so you can't paralyze a hog through 2 consequent turns. (also drowning, for respawn reasons). * If the hog was the only team member alive the whole turn would be skipped. * Otherwise the next available hog in the team would be selected if it was the knocked out hedgehog's turn. * The hog would be unable to be selected with the hog switch tool. ``` ----- Original issue reported on code.google.com by `sheepyluva` on 4 Apr 2011 at 12:11
priority
idea hammer knocks hogs out the idea is that additional to the current effect of the hammer weapon hitting a hedgehog it would immediately paralyze the hedgehog until the start of the second next turn of its team this effect can be reversed by any further damage e g falling damage when being hit through a girder as well as being hit by a hammer again so you can t paralyze a hog through consequent turns also drowning for respawn reasons if the hog was the only team member alive the whole turn would be skipped otherwise the next available hog in the team would be selected if it was the knocked out hedgehog s turn the hog would be unable to be selected with the hog switch tool original issue reported on code google com by sheepyluva on apr at
1
304,938
9,347,433,619
IssuesEvent
2019-03-31 01:33:46
cmput301w19t25/3AM-
https://api.github.com/repos/cmput301w19t25/3AM-
closed
Fix loading data when launching the app after close
Medium priority
Current solution is a little gimmicky; we wait for the data to be loaded from the backend class, then we start the activity. This currently looks weird since the sign-in activity is launched for a second or two, and then the homepage activity is launched.
1.0
Fix loading data when launching the app after close - Current solution is a little gimmicky; we wait for the data to be loaded from the backend class, then we start the activity. This currently looks weird since the sign-in activity is launched for a second or two, and then the homepage activity is launched.
priority
fix loading data when launching the app after close current solution is a little gimmicky we wait for the data to be loaded from the backend class then we start the activity this currently looks weird since the sign in activity is launched for a second or two and then the homepage activity is launched
1
80,608
3,568,152,084
IssuesEvent
2016-01-26 03:08:59
sumpatel/Software-Testing-a1
https://api.github.com/repos/sumpatel/Software-Testing-a1
closed
[Verified] Balance inquiry in valid account
High Priority Medium Severity
Function Tested: Balance inquiry in valid account Initial State of System: Idle Steps to reproduce: 1. Insert card 2. Enter pin 3. Press balance inquiry Expected Outcome: Expected savings and checking options Actual Outcome: Shows money market instead of savings
1.0
[Verified] Balance inquiry in valid account - Function Tested: Balance inquiry in valid account Initial State of System: Idle Steps to reproduce: 1. Insert card 2. Enter pin 3. Press balance inquiry Expected Outcome: Expected savings and checking options Actual Outcome: Shows money market instead of savings
priority
balance inquiry in valid account function tested balance inquiry in valid account initial state of system idle steps to reproduce insert card enter pin press balance inquiry expected outcome expected savings and checking options actual outcome shows money market instead of savings
1
738,341
25,553,837,334
IssuesEvent
2022-11-30 03:44:01
ansible-collections/azure
https://api.github.com/repos/ansible-collections/azure
closed
add async and idempotency support to azurerm_managed_disk
bug enhancement medium_priority work in
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> When attempting to attach a large number of disks to many (100+) systems at scale via the Azure `azurerm_managed_disk` module, `async` support should allow the ability to independently create and attach multiple disks across a large number of Azure virtual machines. This supported configuration exists with other competitors and is in use in many of our production environments, allowing for attachment of a dozen or more disks across hundreds of systems. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> azurerm_managed_disk ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> Both Azure Commercial and AzureUSGovernment are affected cloud providers. This works inside other disparate cloud providers for both commercial and higher security compliance requirements without issue. The below code snippet gets us around it temporarily leveraging the REST API but this is unsustainable long-term. <!--- Paste example playbooks or commands between quotes below --> ```yaml name: Attach disks to Azure VM via Rest API when: disk_create | bool and (data_disk_json | length > 0) delegate_to: "{{ task_delegation | default(omit, true) }}" azure_rm_resource: resource_group: "{{ azure_resource_group }}" provider: compute resource_type: virtualMachines resource_name: "{{ azure_vm_name }}" api_version: "2021-11-01" body: { "location": "{{ azure_location }}", "properties": { "storageProfile": { "dataDisks": "{{ data_disk_json }}" } } } ``` <!--- HINT: You can also paste gist.github.com links for larger files -->
1.0
add async and idempotency support to azurerm_managed_disk - <!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> When attempting to attach a large number of disks to many (100+) systems at scale via the Azure `azurerm_managed_disk` module, `async` support should allow the ability to independently create and attach multiple disks across a large number of Azure virtual machines. This supported configuration exists with other competitors and is in use in many of our production environments, allowing for attachment of a dozen or more disks across hundreds of systems. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> azurerm_managed_disk ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> Both Azure Commercial and AzureUSGovernment are affected cloud providers. This works inside other disparate cloud providers for both commercial and higher security compliance requirements without issue. The below code snippet gets us around it temporarily leveraging the REST API but this is unsustainable long-term. <!--- Paste example playbooks or commands between quotes below --> ```yaml name: Attach disks to Azure VM via Rest API when: disk_create | bool and (data_disk_json | length > 0) delegate_to: "{{ task_delegation | default(omit, true) }}" azure_rm_resource: resource_group: "{{ azure_resource_group }}" provider: compute resource_type: virtualMachines resource_name: "{{ azure_vm_name }}" api_version: "2021-11-01" body: { "location": "{{ azure_location }}", "properties": { "storageProfile": { "dataDisks": "{{ data_disk_json }}" } } } ``` <!--- HINT: You can also paste gist.github.com links for larger files -->
priority
add async and idempotency support to azurerm managed disk summary when attempting to attach a large number of disks to many systems at scale via the azure azurerm managed disk module async support should allow the ability to independently create and attach multiple disks across a large number of azure virtual machines this supported configuration exists with other competitors and is in use in many of our production environments allowing for attachment of a dozen or more disks across hundreds of systems issue type feature idea component name azurerm managed disk additional information both azure commercial and azureusgovernment are affected cloud providers this works inside other disparate cloud providers for both commercial and higher security compliance requirements without issue the below code snippet gets us around it temporarily leveraging the rest api but this is unsustainable long term yaml name attach disks to azure vm via rest api when disk create bool and data disk json length delegate to task delegation default omit true azure rm resource resource group azure resource group provider compute resource type virtualmachines resource name azure vm name api version body location azure location properties storageprofile datadisks data disk json
1
625,623
19,758,834,295
IssuesEvent
2022-01-16 03:32:08
cloudguruab/ginop
https://api.github.com/repos/cloudguruab/ginop
closed
add logic for Permission instance to entire codebase
blockchain foundational medium priority high priority
- Its currently incomplete until more code is introduced in other layers - Add logic, logging, and custom exceptions in the Permission instance
2.0
add logic for Permission instance to entire codebase - - Its currently incomplete until more code is introduced in other layers - Add logic, logging, and custom exceptions in the Permission instance
priority
add logic for permission instance to entire codebase its currently incomplete until more code is introduced in other layers add logic logging and custom exceptions in the permission instance
1
560,260
16,591,583,783
IssuesEvent
2021-06-01 08:23:11
HabitRPG/habitica-android
https://api.github.com/repos/HabitRPG/habitica-android
closed
Hairstyle set not showing as purchased, not equippable
Priority: medium Type: Bug
> Device: samsung SM-J330FN > Android Version: 28 > AppVersion: Version 3.2.2 (2888) > User ID: 6d95ab4a-630b-4180-a4e2-919ca7dd97bc > Level: 111 > Class: warrior > Is in Inn: false > Uses Costume: true > Custom Day Start: 0 > Timezone Offset: 0 > Details: > > Hi Habitica Team, > Hope you are keeping safe and well. > I have a problem with one of the hair bases in Set 1. I bought all the sets, but Android tells me I need to pay 2 gems for one of the hair bases (the others are all fine). > This doesn't happen on the website, only on the app. > It started a few days ago. > I have attached a photo. > Thank you again for all your help. ![Screenshot_20210320-072502_Habitica](https://user-images.githubusercontent.com/8144640/111968333-f14be880-8af0-11eb-8c5f-8dc7c01a0614.jpg) Alys confirmed that they definitely own the hairstyle.
1.0
Hairstyle set not showing as purchased, not equippable - > Device: samsung SM-J330FN > Android Version: 28 > AppVersion: Version 3.2.2 (2888) > User ID: 6d95ab4a-630b-4180-a4e2-919ca7dd97bc > Level: 111 > Class: warrior > Is in Inn: false > Uses Costume: true > Custom Day Start: 0 > Timezone Offset: 0 > Details: > > Hi Habitica Team, > Hope you are keeping safe and well. > I have a problem with one of the hair bases in Set 1. I bought all the sets, but Android tells me I need to pay 2 gems for one of the hair bases (the others are all fine). > This doesn't happen on the website, only on the app. > It started a few days ago. > I have attached a photo. > Thank you again for all your help. ![Screenshot_20210320-072502_Habitica](https://user-images.githubusercontent.com/8144640/111968333-f14be880-8af0-11eb-8c5f-8dc7c01a0614.jpg) Alys confirmed that they definitely own the hairstyle.
priority
hairstyle set not showing as purchased not equippable device samsung sm android version appversion version user id level class warrior is in inn false uses costume true custom day start timezone offset details hi habitica team hope you are keeping safe and well i have a problem with one of the hair bases in set i bought all the sets but android tells me i need to pay gems for one of the hair bases the others are all fine this doesn t happen on the website only on the app it started a few days ago i have attached a photo thank you again for all your help alys confirmed that they definitely own the hairstyle
1
40,738
2,868,939,176
IssuesEvent
2015-06-05 22:04:51
dart-lang/pub
https://api.github.com/repos/dart-lang/pub
closed
pub lish is not filtering out .svn
bug Fixed Priority-Medium
<a href="https://github.com/dgrove"><img src="https://avatars.githubusercontent.com/u/2108507?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [dgrove](https://github.com/dgrove)** _Originally opened as dart-lang/sdk#7487_ ---- see http://commondatastorage.googleapis.com/pub.dartlang.org/packages/args-0.2.9+7.tar.gz for a package uploaded with pub from SDK 0.2.9.7 - it has .svn directories in it.
1.0
pub lish is not filtering out .svn - <a href="https://github.com/dgrove"><img src="https://avatars.githubusercontent.com/u/2108507?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [dgrove](https://github.com/dgrove)** _Originally opened as dart-lang/sdk#7487_ ---- see http://commondatastorage.googleapis.com/pub.dartlang.org/packages/args-0.2.9+7.tar.gz for a package uploaded with pub from SDK 0.2.9.7 - it has .svn directories in it.
priority
pub lish is not filtering out svn issue by originally opened as dart lang sdk see for a package uploaded with pub from sdk it has svn directories in it
1
220,835
7,371,479,279
IssuesEvent
2018-03-13 11:54:37
pmem/issues
https://api.github.com/repos/pmem/issues
closed
SIGABRT/SIGSEGV raised when creating pool based on directory poolset with insufficient size
Exposure: Medium Priority: 3 medium State: To be verified Type: Bug
Steps to reproduce: 1) Create pool set file: ``` $cat pool.set PMEMPOOLSET 20K /dev/shm/ ``` 2) Create obj/log/blk pool: ``` $pmempool create obj pool.set (...) <libpmempool>: <3> [mmap.c:66 util_mmap_init] <libpmempool>: <3> [libpmempool.c:69 libpmempool_init] <libpmempool>: <3> [set.c:120 util_remote_init] <libpmemobj>: <3> [obj.c:1238 pmemobj_createU] path pool.set layout (null) poolsize 0 mode 664 <libpmemobj>: <3> [obj.c:1208 obj_get_nlanes] <libpmemobj>: <3> [set.c:3214 util_pool_create] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 minpartsize 2097152 attr 0x7f7d7c0ccec0 nlanes 0x7fffc39f29c4 can_have_rep 1 <libpmemobj>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 minpartsize 2097152 pattr 0x7f7d7c0ccec0 nlanes 0x7fffc39f29c4 can_have_rep 1 remote 0 <libpmemobj>: <3> [set.c:2052 util_poolset_create_set] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemobj>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemobj>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemobj>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <3> [file.c:503 util_file_open] path "pool.set" size 0x7fffc39f2848 minsize 0 flags 0 <libpmemobj>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemobj>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemobj>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemobj>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemobj>: <4> [file.c:543 util_file_open] actual file size 26 <libpmemobj>: <3> [set.c:1465 util_poolset_parse] setp 0x7fffc39f29c8 
path pool.set fd 3 <libpmemobj>: <3> [set.c:1039 util_parse_add_replica] setp 0x7fffc39f27b0 <libpmemobj>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemobj>: <3> [set.c:1019 util_parse_add_element] set 0x146e980 path /dev/shm/ filesize 20480 <libpmemobj>: <3> [set.c:985 util_parse_add_directory] set 0x146e980 path /dev/shm/ filesize 20480 <libpmemobj>: <3> [set.c:1113 util_poolset_check_devdax] set 0x146e980 <libpmemobj>: <3> [set.c:1373 util_poolset_directories_load] set 0x146e980 <libpmemobj>: <3> [set.c:1311 util_poolset_directory_load] rep 0x146e9e0 dir "/dev/shm/" <libpmemobj>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemobj>: <3> [set.c:1150 util_poolset_check_options] set 0x146e980 <libpmemobj>: <3> [set.c:1166 util_poolset_set_size] set 0x146e980 <libpmemobj>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemobj>: <3> [set.c:2850 util_poolset_append_new_part] set 0x146e980 size 8388608 <libpmemobj>: <3> [set.c:951 util_replica_add_part] replica 0x146e9e0 path "/dev/shm//000000.pmem" filesize 8388608 <libpmemobj>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x146e9e0 path /dev/shm//000000.pmem filesize 8388608 <libpmemobj>: <3> [set.c:883 util_replica_reserve] replica 0x146e9e0 n 1 <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "/dev/shm//000000.pmem" <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <3> [set.c:1166 util_poolset_set_size] set 0x146e980 <libpmemobj>: <3> [set.c:1202 util_poolset_set_size] pool size set to 8388608 <libpmemobj>: <3> [set.c:1937 util_poolset_files_local] set 0x146e980 minpartsize 2097152 create 1 <libpmemobj>: <3> [set.c:1725 util_part_open] part 0x146d9d0 minsize 2097152 create 1 <libpmemobj>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 8388608 minsize 2097152 <libpmemobj>: <3> [set.c:2547 util_replica_map_local] set 0x146e980 repidx 0 flags 1 <libpmemobj>: <3> 
[mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemobj>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f7d7cba5000 <libpmemobj>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f7d7cba5000 <libpmemobj>: <3> [set.c:434 util_map_part] part 0x146d9d0 addr 0x7f7d7cba5000 size 20480 offset 0 flags 1 <libpmemobj>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemobj>: <3> [set.c:1077 util_replica_check_map_sync] set 0x146e980 repidx 0 <libpmemobj>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f7d7cba5000 <libpmemobj>: <3> [set.c:2734 util_replica_create_local] set 0x146e980 repidx 0 flags 1 attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [set.c:2690 util_replica_init_headers_local] set 0x146e980 repidx 0 flags 1 attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [set.c:357 util_map_hdr] part 0x146d9d0 flags 1 <libpmemobj>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemobj>: <3> [set.c:2160 util_header_create] set 0x146e980 repidx 0 partidx 0 attr 0x7f7d7c0ccec0 overwrite 0 <libpmemobj>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f7d7cbaa000, attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f7d7cbaafb8, path /dev/shm//000000.pmem <libpmemobj>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7fffc39f26e8 <libpmemobj>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemobj>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x146da50, len 4 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 
0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f7d7cbaa000, len 4096 <libpmemobj>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f7d7cbaa000, len 4096 <libpmemobj>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f7d7cbaa000 size 4096 <libpmemobj>: <3> [obj.c:907 obj_replica_init_local] rep 0x7f7d7cba5000 is_pmem 0 resvsize 20480 <libpmemobj>: <3> [obj.c:798 obj_descr_create] pop 0x7f7d7cba5000 layout (null) poolsize 8388608 Segmentation fault (core dumped) ``` ``` $pmempool create log pool.set (...) 
<libpmemlog>: <3> [log.c:192 pmemlog_createU] path pool.set poolsize 0 mode 436 <libpmemlog>: <3> [set.c:3214 util_pool_create] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 minpartsize 2097152 attr 0x7f45a337d4e0 nlanes (nil) can_have_rep 0 <libpmemlog>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 minpartsize 2097152 pattr 0x7f45a337d4e0 nlanes (nil) can_have_rep 0 remote 0 <libpmemlog>: <3> [set.c:2052 util_poolset_create_set] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemlog>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemlog>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemlog>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <3> [file.c:503 util_file_open] path "pool.set" size 0x7ffd1c2a22d8 minsize 0 flags 0 <libpmemlog>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemlog>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemlog>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemlog>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemlog>: <4> [file.c:543 util_file_open] actual file size 26 <libpmemlog>: <3> [set.c:1465 util_poolset_parse] setp 0x7ffd1c2a2450 path pool.set fd 3 <libpmemlog>: <3> [set.c:1039 util_parse_add_replica] setp 0x7ffd1c2a2240 <libpmemlog>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemlog>: <3> [set.c:1019 util_parse_add_element] set 0x1d7d980 path /dev/shm/ filesize 20480 <libpmemlog>: <3> [set.c:985 util_parse_add_directory] set 0x1d7d980 path /dev/shm/ filesize 20480 <libpmemlog>: <3> 
[set.c:1113 util_poolset_check_devdax] set 0x1d7d980 <libpmemlog>: <3> [set.c:1373 util_poolset_directories_load] set 0x1d7d980 <libpmemlog>: <3> [set.c:1311 util_poolset_directory_load] rep 0x1d7d9e0 dir "/dev/shm/" <libpmemlog>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemlog>: <3> [set.c:1150 util_poolset_check_options] set 0x1d7d980 <libpmemlog>: <3> [set.c:1166 util_poolset_set_size] set 0x1d7d980 <libpmemlog>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemlog>: <3> [set.c:2850 util_poolset_append_new_part] set 0x1d7d980 size 2097152 <libpmemlog>: <3> [set.c:951 util_replica_add_part] replica 0x1d7d9e0 path "/dev/shm//000000.pmem" filesize 2097152 <libpmemlog>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x1d7d9e0 path /dev/shm//000000.pmem filesize 2097152 <libpmemlog>: <3> [set.c:883 util_replica_reserve] replica 0x1d7d9e0 n 1 <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "/dev/shm//000000.pmem" <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <3> [set.c:1166 util_poolset_set_size] set 0x1d7d980 <libpmemlog>: <3> [set.c:1202 util_poolset_set_size] pool size set to 2097152 <libpmemlog>: <3> [set.c:1937 util_poolset_files_local] set 0x1d7d980 minpartsize 2097152 create 1 <libpmemlog>: <3> [set.c:1725 util_part_open] part 0x1d7c9d0 minsize 2097152 create 1 <libpmemlog>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 2097152 minsize 2097152 <libpmemlog>: <3> [set.c:2547 util_replica_map_local] set 0x1d7d980 repidx 0 flags 1 <libpmemlog>: <3> [mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemlog>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f45a3c25000 <libpmemlog>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f45a3c25000 <libpmemlog>: <3> [set.c:434 util_map_part] part 0x1d7c9d0 addr 0x7f45a3c25000 size 20480 offset 0 flags 1 <libpmemlog>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported 
<libpmemlog>: <3> [set.c:1077 util_replica_check_map_sync] set 0x1d7d980 repidx 0 <libpmemlog>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f45a3c25000 <libpmemlog>: <3> [set.c:2734 util_replica_create_local] set 0x1d7d980 repidx 0 flags 1 attr 0x7f45a337d4e0 <libpmemlog>: <3> [set.c:2690 util_replica_init_headers_local] set 0x1d7d980 repidx 0 flags 1 attr 0x7f45a337d4e0 <libpmemlog>: <3> [set.c:357 util_map_hdr] part 0x1d7c9d0 flags 1 <libpmemlog>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemlog>: <3> [set.c:2160 util_header_create] set 0x1d7d980 repidx 0 partidx 0 attr 0x7f45a337d4e0 overwrite 0 <libpmemlog>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f45a3c2a000, attr 0x7f45a337d4e0 <libpmemlog>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f45a3c2afb8, path /dev/shm//000000.pmem <libpmemlog>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7ffd1c2a2178 <libpmemlog>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemlog>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x1d7ca50, len 4 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 
<libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f45a3c2a000, len 4096 <libpmemlog>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f45a3c2a000, len 4096 <libpmemlog>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f45a3c2a000 size 4096 <libpmemlog>: <3> [log.c:84 log_descr_create] plp 0x7f45a3c25000 poolsize 2097152 <libpmemlog>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f45a3c26000, len 24 <libpmemlog>: <3> [log.c:142 log_runtime_init] plp 0x7f45a3c25000 rdonly 0 <libpmemlog>: <3> [mmap.c:267 util_range_none] addr 0x7f45a3c25000 len 4096 <libpmemlog>: <3> [mmap.c:209 util_range_ro] addr 0x7f45a3c26000 len 2093056 <libpmemlog>: <1> [mmap.c:227 util_range_ro] mprotect: PROT_READ: Cannot allocate memory <libpmemlog>: <1> [log.c:178 log_runtime_init] assertion failure: util_range_ro((char *)plp->addr + sizeof(struct pool_hdr), plp->size - sizeof(struct pool_hdr)) >= 0 Aborted (core dumped) ``` ``` $pmempool create blk 512 pool.set <libpmemblk>: <3> [set.c:3214 util_pool_create] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 minpartsize 2097152 attr 0x7f42ed834860 nlanes (nil) can_have_rep 0 <libpmemblk>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 minpartsize 2097152 pattr 0x7f42ed834860 nlanes (nil) can_have_rep 0 remote 0 <libpmemblk>: <3> [set.c:2052 util_poolset_create_set] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemblk>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemblk>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemblk>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <3> [file.c:503 util_file_open] path "pool.set" size 
0x7ffdb7c14848 minsize 0 flags 0 <libpmemblk>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemblk>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemblk>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemblk>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemblk>: <4> [file.c:543 util_file_open] actual file size 26 <libpmemblk>: <3> [set.c:1465 util_poolset_parse] setp 0x7ffdb7c149c0 path pool.set fd 3 <libpmemblk>: <3> [set.c:1039 util_parse_add_replica] setp 0x7ffdb7c147b0 <libpmemblk>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemblk>: <3> [set.c:1019 util_parse_add_element] set 0x1838980 path /dev/shm/ filesize 20480 <libpmemblk>: <3> [set.c:985 util_parse_add_directory] set 0x1838980 path /dev/shm/ filesize 20480 <libpmemblk>: <3> [set.c:1113 util_poolset_check_devdax] set 0x1838980 <libpmemblk>: <3> [set.c:1373 util_poolset_directories_load] set 0x1838980 <libpmemblk>: <3> [set.c:1311 util_poolset_directory_load] rep 0x18389e0 dir "/dev/shm/" <libpmemblk>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemblk>: <3> [set.c:1150 util_poolset_check_options] set 0x1838980 <libpmemblk>: <3> [set.c:1166 util_poolset_set_size] set 0x1838980 <libpmemblk>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemblk>: <3> [set.c:2850 util_poolset_append_new_part] set 0x1838980 size 16785408 <libpmemblk>: <3> [set.c:951 util_replica_add_part] replica 0x18389e0 path "/dev/shm//000000.pmem" filesize 16785408 <libpmemblk>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x18389e0 path /dev/shm//000000.pmem filesize 16785408 <libpmemblk>: <3> [set.c:883 util_replica_reserve] replica 0x18389e0 n 1 <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path 
"/dev/shm//000000.pmem" <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <3> [set.c:1166 util_poolset_set_size] set 0x1838980 <libpmemblk>: <3> [set.c:1202 util_poolset_set_size] pool size set to 16785408 <libpmemblk>: <3> [set.c:1937 util_poolset_files_local] set 0x1838980 minpartsize 2097152 create 1 <libpmemblk>: <3> [set.c:1725 util_part_open] part 0x18379d0 minsize 2097152 create 1 <libpmemblk>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 16785408 minsize 2097152 <libpmemblk>: <3> [set.c:2547 util_replica_map_local] set 0x1838980 repidx 0 flags 1 <libpmemblk>: <3> [mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemblk>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f42edeb3000 <libpmemblk>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f42edeb3000 <libpmemblk>: <3> [set.c:434 util_map_part] part 0x18379d0 addr 0x7f42edeb3000 size 20480 offset 0 flags 1 <libpmemblk>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemblk>: <3> [set.c:1077 util_replica_check_map_sync] set 0x1838980 repidx 0 <libpmemblk>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f42edeb3000 <libpmemblk>: <3> [set.c:2734 util_replica_create_local] set 0x1838980 repidx 0 flags 1 attr 0x7f42ed834860 <libpmemblk>: <3> [set.c:2690 util_replica_init_headers_local] set 0x1838980 repidx 0 flags 1 attr 0x7f42ed834860 <libpmemblk>: <3> [set.c:357 util_map_hdr] part 0x18379d0 flags 1 <libpmemblk>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemblk>: <3> [set.c:2160 util_header_create] set 0x1838980 repidx 0 partidx 0 attr 0x7f42ed834860 overwrite 0 <libpmemblk>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f42edeb8000, attr 0x7f42ed834860 <libpmemblk>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 
0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f42edeb8fb8, path /dev/shm//000000.pmem <libpmemblk>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7ffdb7c146e8 <libpmemblk>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemblk>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x1837a50, len 4 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f42edeb8000, len 4096 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb8000, len 4096 <libpmemblk>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f42edeb8000 size 4096 <libpmemblk>: <3> [blk.c:292 blk_descr_create] pbp 0x7f42edeb3000 bsize 512 zeroed 1 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb4000, len 4 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb4004, len 4 <libpmemblk>: <3> [blk.c:330 blk_runtime_init] pbp 0x7f42edeb3000 bsize 512 rdonly 0 <libpmemblk>: <4> [blk.c:352 blk_runtime_init] data area 0x7f42edeb5000 data size 16777216 bsize 512 <libpmemblk>: <3> [btt.c:1436 btt_init] rawsize 16777216 lbasize 512 <libpmemblk>: <3> [btt.c:1280 read_layout] bttp 0x1837a70 <libpmemblk>: <3> 
[btt.c:326 read_info] infop 0x7ffdb7c13850 <libpmemblk>: <3> [btt.c:329 read_info] signature invalid <libpmemblk>: <3> [btt.c:1101 write_layout] bttp 0x1837a70 lane 0 write 0 <libpmemblk>: <4> [btt.c:1126 write_layout] narena 1 <libpmemblk>: <4> [btt.c:1131 write_layout] adjusted internal_lbasize 512 <libpmemblk>: <4> [btt.c:1142 write_layout] layout arena 0 <libpmemblk>: <4> [btt.c:1158 write_layout] internal_nlba 32458 external_nlba 32202 <libpmemblk>: <3> [btt.c:1479 btt_init] success, bttp 0x1837a70 nlane 16 <libpmemblk>: <3> [btt.c:1493 btt_nlane] bttp 0x1837a70 <libpmemblk>: <3> [mmap.c:267 util_range_none] addr 0x7f42edeb3000 len 4096 <libpmemblk>: <3> [mmap.c:209 util_range_ro] addr 0x7f42edeb5000 len 16777216 <libpmemblk>: <1> [mmap.c:227 util_range_ro] mprotect: PROT_READ: Cannot allocate memory <libpmemblk>: <1> [blk.c:398 blk_runtime_init] assertion failure: util_range_ro(pbp->data, pbp->datasize) >= 0 Aborted (core dumped) ``` File is created under /dev/shm/ ``` $ls /dev/shm 000000.pmem ``` Found in 1.4-rc1-19-gf8d0b0668
1.0
SIGABRT/SIGSEGV raised when creating pool based on directory poolset with insufficient size - Steps to reproduce: 1) Create pool set file: ``` $cat pool.set PMEMPOOLSET 20K /dev/shm/ ``` 2) Create obj/log/blk pool: ``` $pmempool create obj pool.set (...) <libpmempool>: <3> [mmap.c:66 util_mmap_init] <libpmempool>: <3> [libpmempool.c:69 libpmempool_init] <libpmempool>: <3> [set.c:120 util_remote_init] <libpmemobj>: <3> [obj.c:1238 pmemobj_createU] path pool.set layout (null) poolsize 0 mode 664 <libpmemobj>: <3> [obj.c:1208 obj_get_nlanes] <libpmemobj>: <3> [set.c:3214 util_pool_create] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 minpartsize 2097152 attr 0x7f7d7c0ccec0 nlanes 0x7fffc39f29c4 can_have_rep 1 <libpmemobj>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 minpartsize 2097152 pattr 0x7f7d7c0ccec0 nlanes 0x7fffc39f29c4 can_have_rep 1 remote 0 <libpmemobj>: <3> [set.c:2052 util_poolset_create_set] setp 0x7fffc39f29c8 path pool.set poolsize 0 minsize 8388608 <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemobj>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemobj>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemobj>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <3> [file.c:503 util_file_open] path "pool.set" size 0x7fffc39f2848 minsize 0 flags 0 <libpmemobj>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemobj>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemobj>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemobj>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemobj>: <4> [file.c:543 util_file_open] 
actual file size 26 <libpmemobj>: <3> [set.c:1465 util_poolset_parse] setp 0x7fffc39f29c8 path pool.set fd 3 <libpmemobj>: <3> [set.c:1039 util_parse_add_replica] setp 0x7fffc39f27b0 <libpmemobj>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemobj>: <3> [set.c:1019 util_parse_add_element] set 0x146e980 path /dev/shm/ filesize 20480 <libpmemobj>: <3> [set.c:985 util_parse_add_directory] set 0x146e980 path /dev/shm/ filesize 20480 <libpmemobj>: <3> [set.c:1113 util_poolset_check_devdax] set 0x146e980 <libpmemobj>: <3> [set.c:1373 util_poolset_directories_load] set 0x146e980 <libpmemobj>: <3> [set.c:1311 util_poolset_directory_load] rep 0x146e9e0 dir "/dev/shm/" <libpmemobj>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemobj>: <3> [set.c:1150 util_poolset_check_options] set 0x146e980 <libpmemobj>: <3> [set.c:1166 util_poolset_set_size] set 0x146e980 <libpmemobj>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemobj>: <3> [set.c:2850 util_poolset_append_new_part] set 0x146e980 size 8388608 <libpmemobj>: <3> [set.c:951 util_replica_add_part] replica 0x146e9e0 path "/dev/shm//000000.pmem" filesize 8388608 <libpmemobj>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x146e9e0 path /dev/shm//000000.pmem filesize 8388608 <libpmemobj>: <3> [set.c:883 util_replica_reserve] replica 0x146e9e0 n 1 <libpmemobj>: <3> [file.c:186 util_file_is_device_dax] path "/dev/shm//000000.pmem" <libpmemobj>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemobj>: <3> [set.c:1166 util_poolset_set_size] set 0x146e980 <libpmemobj>: <3> [set.c:1202 util_poolset_set_size] pool size set to 8388608 <libpmemobj>: <3> [set.c:1937 util_poolset_files_local] set 0x146e980 minpartsize 2097152 create 1 <libpmemobj>: <3> [set.c:1725 util_part_open] part 0x146d9d0 minsize 2097152 create 1 <libpmemobj>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 8388608 minsize 2097152 <libpmemobj>: <3> [set.c:2547 
util_replica_map_local] set 0x146e980 repidx 0 flags 1 <libpmemobj>: <3> [mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemobj>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f7d7cba5000 <libpmemobj>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f7d7cba5000 <libpmemobj>: <3> [set.c:434 util_map_part] part 0x146d9d0 addr 0x7f7d7cba5000 size 20480 offset 0 flags 1 <libpmemobj>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemobj>: <3> [set.c:1077 util_replica_check_map_sync] set 0x146e980 repidx 0 <libpmemobj>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f7d7cba5000 <libpmemobj>: <3> [set.c:2734 util_replica_create_local] set 0x146e980 repidx 0 flags 1 attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [set.c:2690 util_replica_init_headers_local] set 0x146e980 repidx 0 flags 1 attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [set.c:357 util_map_hdr] part 0x146d9d0 flags 1 <libpmemobj>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemobj>: <3> [set.c:2160 util_header_create] set 0x146e980 repidx 0 partidx 0 attr 0x7f7d7c0ccec0 overwrite 0 <libpmemobj>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f7d7cbaa000, attr 0x7f7d7c0ccec0 <libpmemobj>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f7d7cbaafb8, path /dev/shm//000000.pmem <libpmemobj>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7fffc39f26e8 <libpmemobj>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemobj>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x146da50, len 4 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 
<libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f7d7cbaafb8 <libpmemobj>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x146d9d0 addr 0x7f7d7cbaafb8 len 64 flush 1 <libpmemobj>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f7d7cbaa000, len 4096 <libpmemobj>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f7d7cbaa000, len 4096 <libpmemobj>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f7d7cbaa000 size 4096 <libpmemobj>: <3> [obj.c:907 obj_replica_init_local] rep 0x7f7d7cba5000 is_pmem 0 resvsize 20480 <libpmemobj>: <3> [obj.c:798 obj_descr_create] pop 0x7f7d7cba5000 layout (null) poolsize 8388608 Segmentation fault (core dumped) ``` ``` $pmempool create log pool.set (...) 
<libpmemlog>: <3> [log.c:192 pmemlog_createU] path pool.set poolsize 0 mode 436 <libpmemlog>: <3> [set.c:3214 util_pool_create] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 minpartsize 2097152 attr 0x7f45a337d4e0 nlanes (nil) can_have_rep 0 <libpmemlog>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 minpartsize 2097152 pattr 0x7f45a337d4e0 nlanes (nil) can_have_rep 0 remote 0 <libpmemlog>: <3> [set.c:2052 util_poolset_create_set] setp 0x7ffd1c2a2450 path pool.set poolsize 0 minsize 2097152 <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemlog>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemlog>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemlog>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <3> [file.c:503 util_file_open] path "pool.set" size 0x7ffd1c2a22d8 minsize 0 flags 0 <libpmemlog>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemlog>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemlog>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemlog>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemlog>: <4> [file.c:543 util_file_open] actual file size 26 <libpmemlog>: <3> [set.c:1465 util_poolset_parse] setp 0x7ffd1c2a2450 path pool.set fd 3 <libpmemlog>: <3> [set.c:1039 util_parse_add_replica] setp 0x7ffd1c2a2240 <libpmemlog>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemlog>: <3> [set.c:1019 util_parse_add_element] set 0x1d7d980 path /dev/shm/ filesize 20480 <libpmemlog>: <3> [set.c:985 util_parse_add_directory] set 0x1d7d980 path /dev/shm/ filesize 20480 <libpmemlog>: <3> 
[set.c:1113 util_poolset_check_devdax] set 0x1d7d980 <libpmemlog>: <3> [set.c:1373 util_poolset_directories_load] set 0x1d7d980 <libpmemlog>: <3> [set.c:1311 util_poolset_directory_load] rep 0x1d7d9e0 dir "/dev/shm/" <libpmemlog>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemlog>: <3> [set.c:1150 util_poolset_check_options] set 0x1d7d980 <libpmemlog>: <3> [set.c:1166 util_poolset_set_size] set 0x1d7d980 <libpmemlog>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemlog>: <3> [set.c:2850 util_poolset_append_new_part] set 0x1d7d980 size 2097152 <libpmemlog>: <3> [set.c:951 util_replica_add_part] replica 0x1d7d9e0 path "/dev/shm//000000.pmem" filesize 2097152 <libpmemlog>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x1d7d9e0 path /dev/shm//000000.pmem filesize 2097152 <libpmemlog>: <3> [set.c:883 util_replica_reserve] replica 0x1d7d9e0 n 1 <libpmemlog>: <3> [file.c:186 util_file_is_device_dax] path "/dev/shm//000000.pmem" <libpmemlog>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemlog>: <3> [set.c:1166 util_poolset_set_size] set 0x1d7d980 <libpmemlog>: <3> [set.c:1202 util_poolset_set_size] pool size set to 2097152 <libpmemlog>: <3> [set.c:1937 util_poolset_files_local] set 0x1d7d980 minpartsize 2097152 create 1 <libpmemlog>: <3> [set.c:1725 util_part_open] part 0x1d7c9d0 minsize 2097152 create 1 <libpmemlog>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 2097152 minsize 2097152 <libpmemlog>: <3> [set.c:2547 util_replica_map_local] set 0x1d7d980 repidx 0 flags 1 <libpmemlog>: <3> [mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemlog>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f45a3c25000 <libpmemlog>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f45a3c25000 <libpmemlog>: <3> [set.c:434 util_map_part] part 0x1d7c9d0 addr 0x7f45a3c25000 size 20480 offset 0 flags 1 <libpmemlog>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported 
<libpmemlog>: <3> [set.c:1077 util_replica_check_map_sync] set 0x1d7d980 repidx 0 <libpmemlog>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f45a3c25000 <libpmemlog>: <3> [set.c:2734 util_replica_create_local] set 0x1d7d980 repidx 0 flags 1 attr 0x7f45a337d4e0 <libpmemlog>: <3> [set.c:2690 util_replica_init_headers_local] set 0x1d7d980 repidx 0 flags 1 attr 0x7f45a337d4e0 <libpmemlog>: <3> [set.c:357 util_map_hdr] part 0x1d7c9d0 flags 1 <libpmemlog>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemlog>: <3> [set.c:2160 util_header_create] set 0x1d7d980 repidx 0 partidx 0 attr 0x7f45a337d4e0 overwrite 0 <libpmemlog>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f45a3c2a000, attr 0x7f45a337d4e0 <libpmemlog>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f45a3c2afb8, path /dev/shm//000000.pmem <libpmemlog>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7ffd1c2a2178 <libpmemlog>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemlog>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x1d7ca50, len 4 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f45a3c2afb8 <libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f45a3c2afb8 
<libpmemlog>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x1d7c9d0 addr 0x7f45a3c2afb8 len 64 flush 1 <libpmemlog>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f45a3c2a000, len 4096 <libpmemlog>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f45a3c2a000, len 4096 <libpmemlog>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f45a3c2a000 size 4096 <libpmemlog>: <3> [log.c:84 log_descr_create] plp 0x7f45a3c25000 poolsize 2097152 <libpmemlog>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f45a3c26000, len 24 <libpmemlog>: <3> [log.c:142 log_runtime_init] plp 0x7f45a3c25000 rdonly 0 <libpmemlog>: <3> [mmap.c:267 util_range_none] addr 0x7f45a3c25000 len 4096 <libpmemlog>: <3> [mmap.c:209 util_range_ro] addr 0x7f45a3c26000 len 2093056 <libpmemlog>: <1> [mmap.c:227 util_range_ro] mprotect: PROT_READ: Cannot allocate memory <libpmemlog>: <1> [log.c:178 log_runtime_init] assertion failure: util_range_ro((char *)plp->addr + sizeof(struct pool_hdr), plp->size - sizeof(struct pool_hdr)) >= 0 Aborted (core dumped) ``` ``` $pmempool create blk 512 pool.set <libpmemblk>: <3> [set.c:3214 util_pool_create] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 minpartsize 2097152 attr 0x7f42ed834860 nlanes (nil) can_have_rep 0 <libpmemblk>: <3> [set.c:3013 util_pool_create_uuids] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 minpartsize 2097152 pattr 0x7f42ed834860 nlanes (nil) can_have_rep 0 remote 0 <libpmemblk>: <3> [set.c:2052 util_poolset_create_set] setp 0x7ffdb7c149c0 path pool.set poolsize 0 minsize 16785408 <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemblk>: <3> [file.c:133 util_fd_is_device_dax] fd 3 <libpmemblk>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemblk>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <3> [file.c:503 util_file_open] path "pool.set" size 
0x7ffdb7c14848 minsize 0 flags 0 <libpmemblk>: <3> [file.c:222 util_file_get_size] path "pool.set" <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path "pool.set" <libpmemblk>: <3> [file.c:133 util_fd_is_device_dax] fd 4 <libpmemblk>: <4> [file.c:153 util_fd_is_device_dax] not a character device <libpmemblk>: <4> [file.c:174 util_fd_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <4> [file.c:236 util_file_get_size] file length 26 <libpmemblk>: <4> [file.c:543 util_file_open] actual file size 26 <libpmemblk>: <3> [set.c:1465 util_poolset_parse] setp 0x7ffdb7c149c0 path pool.set fd 3 <libpmemblk>: <3> [set.c:1039 util_parse_add_replica] setp 0x7ffdb7c147b0 <libpmemblk>: <3> [file_posix.c:139 util_is_absolute_path] path: /dev/shm/ <libpmemblk>: <3> [set.c:1019 util_parse_add_element] set 0x1838980 path /dev/shm/ filesize 20480 <libpmemblk>: <3> [set.c:985 util_parse_add_directory] set 0x1838980 path /dev/shm/ filesize 20480 <libpmemblk>: <3> [set.c:1113 util_poolset_check_devdax] set 0x1838980 <libpmemblk>: <3> [set.c:1373 util_poolset_directories_load] set 0x1838980 <libpmemblk>: <3> [set.c:1311 util_poolset_directory_load] rep 0x18389e0 dir "/dev/shm/" <libpmemblk>: <4> [set.c:1634 util_poolset_parse] set file format correct (pool.set) <libpmemblk>: <3> [set.c:1150 util_poolset_check_options] set 0x1838980 <libpmemblk>: <3> [set.c:1166 util_poolset_set_size] set 0x1838980 <libpmemblk>: <3> [set.c:1202 util_poolset_set_size] pool size set to 0 <libpmemblk>: <3> [set.c:2850 util_poolset_append_new_part] set 0x1838980 size 16785408 <libpmemblk>: <3> [set.c:951 util_replica_add_part] replica 0x18389e0 path "/dev/shm//000000.pmem" filesize 16785408 <libpmemblk>: <3> [set.c:913 util_replica_add_part_by_idx] replica 0x18389e0 path /dev/shm//000000.pmem filesize 16785408 <libpmemblk>: <3> [set.c:883 util_replica_reserve] replica 0x18389e0 n 1 <libpmemblk>: <3> [file.c:186 util_file_is_device_dax] path 
"/dev/shm//000000.pmem" <libpmemblk>: <4> [file.c:211 util_file_is_device_dax] returning 0 <libpmemblk>: <3> [set.c:1166 util_poolset_set_size] set 0x1838980 <libpmemblk>: <3> [set.c:1202 util_poolset_set_size] pool size set to 16785408 <libpmemblk>: <3> [set.c:1937 util_poolset_files_local] set 0x1838980 minpartsize 2097152 create 1 <libpmemblk>: <3> [set.c:1725 util_part_open] part 0x18379d0 minsize 2097152 create 1 <libpmemblk>: <3> [file.c:438 util_file_create] path "/dev/shm//000000.pmem" size 16785408 minsize 2097152 <libpmemblk>: <3> [set.c:2547 util_replica_map_local] set 0x1838980 repidx 0 flags 1 <libpmemblk>: <3> [mmap_posix.c:153 util_map_hint] len 20480 req_align 0 <libpmemblk>: <4> [mmap_posix.c:174 util_map_hint] system choice 0x7f42edeb3000 <libpmemblk>: <4> [mmap_posix.c:179 util_map_hint] hint 0x7f42edeb3000 <libpmemblk>: <3> [set.c:434 util_map_part] part 0x18379d0 addr 0x7f42edeb3000 size 20480 offset 0 flags 1 <libpmemblk>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemblk>: <3> [set.c:1077 util_replica_check_map_sync] set 0x1838980 repidx 0 <libpmemblk>: <3> [set.c:2664 util_replica_map_local] replica #0 addr 0x7f42edeb3000 <libpmemblk>: <3> [set.c:2734 util_replica_create_local] set 0x1838980 repidx 0 flags 1 attr 0x7f42ed834860 <libpmemblk>: <3> [set.c:2690 util_replica_init_headers_local] set 0x1838980 repidx 0 flags 1 attr 0x7f42ed834860 <libpmemblk>: <3> [set.c:357 util_map_hdr] part 0x18379d0 flags 1 <libpmemblk>: <4> [mmap_posix.c:214 util_map_sync] mmap with MAP_SYNC not supported <libpmemblk>: <3> [set.c:2160 util_header_create] set 0x1838980 repidx 0 partidx 0 attr 0x7f42ed834860 overwrite 0 <libpmemblk>: <3> [set.c:3496 util_pool_attr2hdr] hdr 0x7f42edeb8000, attr 0x7f42ed834860 <libpmemblk>: <3> [shutdown_state.c:69 shutdown_state_init] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 
0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:85 shutdown_state_add_part] sds 0x7f42edeb8fb8, path /dev/shm//000000.pmem <libpmemblk>: <3> [os_dimm_none.c:60 os_dimm_usc] path /dev/shm//000000.pmem, usc 0x7ffdb7c146e8 <libpmemblk>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid (nil), len 0 <libpmemblk>: <3> [os_dimm_none.c:45 os_dimm_uid] path /dev/shm//000000.pmem, uid 0x1837a50, len 4 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:133 shutdown_state_set_flag] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [shutdown_state.c:55 shutdown_state_checksum] sds 0x7f42edeb8fb8 <libpmemblk>: <3> [os_deep_linux.c:179 os_part_deep_common] part 0x18379d0 addr 0x7f42edeb8fb8 len 64 flush 1 <libpmemblk>: <3> [util_pmem.h:67 util_persist_auto] is_pmem 0, addr 0x7f42edeb8000, len 4096 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb8000, len 4096 <libpmemblk>: <4> [set.c:414 util_unmap_hdr] munmap: addr 0x7f42edeb8000 size 4096 <libpmemblk>: <3> [blk.c:292 blk_descr_create] pbp 0x7f42edeb3000 bsize 512 zeroed 1 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb4000, len 4 <libpmemblk>: <3> [util_pmem.h:53 util_persist] is_pmem 0, addr 0x7f42edeb4004, len 4 <libpmemblk>: <3> [blk.c:330 blk_runtime_init] pbp 0x7f42edeb3000 bsize 512 rdonly 0 <libpmemblk>: <4> [blk.c:352 blk_runtime_init] data area 0x7f42edeb5000 data size 16777216 bsize 512 <libpmemblk>: <3> [btt.c:1436 btt_init] rawsize 16777216 lbasize 512 <libpmemblk>: <3> [btt.c:1280 read_layout] bttp 0x1837a70 <libpmemblk>: <3> 
[btt.c:326 read_info] infop 0x7ffdb7c13850 <libpmemblk>: <3> [btt.c:329 read_info] signature invalid <libpmemblk>: <3> [btt.c:1101 write_layout] bttp 0x1837a70 lane 0 write 0 <libpmemblk>: <4> [btt.c:1126 write_layout] narena 1 <libpmemblk>: <4> [btt.c:1131 write_layout] adjusted internal_lbasize 512 <libpmemblk>: <4> [btt.c:1142 write_layout] layout arena 0 <libpmemblk>: <4> [btt.c:1158 write_layout] internal_nlba 32458 external_nlba 32202 <libpmemblk>: <3> [btt.c:1479 btt_init] success, bttp 0x1837a70 nlane 16 <libpmemblk>: <3> [btt.c:1493 btt_nlane] bttp 0x1837a70 <libpmemblk>: <3> [mmap.c:267 util_range_none] addr 0x7f42edeb3000 len 4096 <libpmemblk>: <3> [mmap.c:209 util_range_ro] addr 0x7f42edeb5000 len 16777216 <libpmemblk>: <1> [mmap.c:227 util_range_ro] mprotect: PROT_READ: Cannot allocate memory <libpmemblk>: <1> [blk.c:398 blk_runtime_init] assertion failure: util_range_ro(pbp->data, pbp->datasize) >= 0 Aborted (core dumped) ``` File is created under /dev/shm/ ``` $ls /dev/shm 000000.pmem ``` Found in 1.4-rc1-19-gf8d0b0668
priority
sigabrt sigsegv raised when creating pool based on directory poolset with insufficient size steps to reproduce create pool set file cat pool set pmempoolset dev shm create obj log blk pool pmempool create obj pool set path pool set layout null poolsize mode setp path pool set poolsize minsize minpartsize attr nlanes can have rep setp path pool set poolsize minsize minpartsize pattr nlanes can have rep remote setp path pool set poolsize minsize path pool set fd not a character device returning returning path pool set size minsize flags path pool set path pool set fd not a character device returning returning file length actual file size setp path pool set fd setp path dev shm set path dev shm filesize set path dev shm filesize set set rep dir dev shm set file format correct pool set set set pool size set to set size replica path dev shm pmem filesize replica path dev shm pmem filesize replica n path dev shm pmem returning set pool size set to set minpartsize create part minsize create path dev shm pmem size minsize set repidx flags len req align system choice hint part addr size offset flags mmap with map sync not supported set repidx replica addr set repidx flags attr set repidx flags attr part flags mmap with map sync not supported set repidx partidx attr overwrite hdr attr sds sds part addr len flush sds path dev shm pmem path dev shm pmem usc path dev shm pmem uid nil len path dev shm pmem uid len part addr len flush sds part addr len flush sds part addr len flush sds part addr len flush is pmem addr len is pmem addr len munmap addr size rep is pmem resvsize pop layout null poolsize segmentation fault core dumped pmempool create log pool set path pool set poolsize mode setp path pool set poolsize minsize minpartsize attr nlanes nil can have rep setp path pool set poolsize minsize minpartsize pattr nlanes nil can have rep remote setp path pool set poolsize minsize path pool set fd not a character device returning returning path pool set size minsize flags path 
pool set path pool set fd not a character device returning returning file length actual file size setp path pool set fd setp path dev shm set path dev shm filesize set path dev shm filesize set set rep dir dev shm set file format correct pool set set set pool size set to set size replica path dev shm pmem filesize replica path dev shm pmem filesize replica n path dev shm pmem returning set pool size set to set minpartsize create part minsize create path dev shm pmem size minsize set repidx flags len req align system choice hint part addr size offset flags mmap with map sync not supported set repidx replica addr set repidx flags attr set repidx flags attr part flags mmap with map sync not supported set repidx partidx attr overwrite hdr attr sds sds part addr len flush sds path dev shm pmem path dev shm pmem usc path dev shm pmem uid nil len path dev shm pmem uid len part addr len flush sds part addr len flush sds part addr len flush sds part addr len flush is pmem addr len is pmem addr len munmap addr size plp poolsize is pmem addr len plp rdonly addr len addr len mprotect prot read cannot allocate memory assertion failure util range ro char plp addr sizeof struct pool hdr plp size sizeof struct pool hdr aborted core dumped pmempool create blk pool set setp path pool set poolsize minsize minpartsize attr nlanes nil can have rep setp path pool set poolsize minsize minpartsize pattr nlanes nil can have rep remote setp path pool set poolsize minsize path pool set fd not a character device returning returning path pool set size minsize flags path pool set path pool set fd not a character device returning returning file length actual file size setp path pool set fd setp path dev shm set path dev shm filesize set path dev shm filesize set set rep dir dev shm set file format correct pool set set set pool size set to set size replica path dev shm pmem filesize replica path dev shm pmem filesize replica n path dev shm pmem returning set pool size set to set minpartsize 
create part minsize create path dev shm pmem size minsize set repidx flags len req align system choice hint part addr size offset flags mmap with map sync not supported set repidx replica addr set repidx flags attr set repidx flags attr part flags mmap with map sync not supported set repidx partidx attr overwrite hdr attr sds sds part addr len flush sds path dev shm pmem path dev shm pmem usc path dev shm pmem uid nil len path dev shm pmem uid len part addr len flush sds part addr len flush sds part addr len flush sds part addr len flush is pmem addr len is pmem addr len munmap addr size pbp bsize zeroed is pmem addr len is pmem addr len pbp bsize rdonly data area data size bsize rawsize lbasize bttp infop signature invalid bttp lane write narena adjusted internal lbasize layout arena internal nlba external nlba success bttp nlane bttp addr len addr len mprotect prot read cannot allocate memory assertion failure util range ro pbp data pbp datasize aborted core dumped file is created under dev shm ls dev shm pmem found in
1
317,280
9,662,771,997
IssuesEvent
2019-05-20 21:48:41
bitmal/TDot-Paths
https://api.github.com/repos/bitmal/TDot-Paths
closed
Listview split screen buggy
bug medium priority ui wontfix
When removing the List View for the merchant items, it appears to stretch the map in an obtuse way. I have a feeling it has something to do with the map canvas not properly updating to account for the change.
1.0
Listview split screen buggy - When removing the List View for the merchant items, it appears to stretch the map in an obtuse way. I have a feeling it has something to do with the map canvas not properly updating to account for the change.
priority
listview split screen buggy when removing the list view for the merchant items it appears to stretch the map in an obtuse way i have a feeling it has something to do with the map canvas not properly updating to account for the change
1
809,279
30,184,299,553
IssuesEvent
2023-07-04 10:58:55
Benjamin-Loison/YouTube-operational-API
https://api.github.com/repos/Benjamin-Loison/YouTube-operational-API
opened
Don't use numbers to identify private instances to disable counting/enumerating them
enhancement low priority medium security
Note that the downside is a more complex URL; while we already have some entropy in `instanceKey`, it's debatable whether to keep `instanceKey` if we make the URL more complex. In fact, I don't see any advantage to keeping `instanceKey` in such a scenario. We could, for instance, use `tr -dc a-z0-9 </dev/urandom | head -c 32 ; echo ''` to generate the subdomain name.
1.0
Don't use numbers to identify private instances to disable counting/enumerating them - Note that the downside is a more complex URL; while we already have some entropy in `instanceKey`, it's debatable whether to keep `instanceKey` if we make the URL more complex. In fact, I don't see any advantage to keeping `instanceKey` in such a scenario. We could, for instance, use `tr -dc a-z0-9 </dev/urandom | head -c 32 ; echo ''` to generate the subdomain name.
priority
don t use numbers to identify private instances to disable counting enumerating them note that the downside is a more complex url while we already have some entropy in instancekey it s debatable to keep instancekey if make the url more complex in fact i don t see any advantage to keep instancekey in such a scenario could for instance use tr dc a dev urandom head c echo to generate the subdomain name
1
130,232
5,112,204,561
IssuesEvent
2017-01-06 10:14:09
kulish-alina/HR_Project
https://api.github.com/repos/kulish-alina/HR_Project
closed
Close status after hiring
enhancement frontend medium priority
Status of the vacancy should be "Closed" after hiring a candidate. ![image](https://cloud.githubusercontent.com/assets/11043173/21259878/2c58cbea-c38c-11e6-869e-4e63961a1094.png)
1.0
Close status after hiring - Status of the vacancy should be "Closed" after hiring a candidate. ![image](https://cloud.githubusercontent.com/assets/11043173/21259878/2c58cbea-c38c-11e6-869e-4e63961a1094.png)
priority
close status after hiring status of vacansy should be closed after hiring candidate
1
514,497
14,940,077,541
IssuesEvent
2021-01-25 17:46:50
AZMAG/Peoria-Business-Resources-Tool
https://api.github.com/repos/AZMAG/Peoria-Business-Resources-Tool
opened
Data issue to check
Issue: Enhancement Issue: Maintenance Priority: Medium
Please go ahead and proceed with entering the businesses on the attached spreadsheet except Pita Jungle into the mapping system. There might be a glitch with Pita Jungle as it is reflecting Amber Costa (a City of Peoria co-worker) as well as a May entry date. Please remove Musclemaker Grill from the current map of businesses as it closed and a new restaurant will soon open in its place.
1.0
Data issue to check - Please go ahead and proceed with entering the businesses on the attached spreadsheet except Pita Jungle into the mapping system. There might be a glitch with Pita Jungle as it is reflecting Amber Costa (a City of Peoria co-worker) as well as a May entry date. Please remove Musclemaker Grill from the current map of businesses as it closed and a new restaurant will soon open in its place.
priority
data issue to check please go ahead and proceed with entering the businesses on the attached spreadsheet except pita jungle into the mapping system there might be a glitch with pita jungle as it is reflecting amber costa a city of peoria co worker as well as a may entry date please remove musclemaker grill from the current map of businesses as it closed and a new restaurant will soon open in its place
1
241,698
7,822,360,347
IssuesEvent
2018-06-14 02:01:16
MarshallAsch/veil-droid
https://api.github.com/repos/MarshallAsch/veil-droid
closed
Unit test Data store
Priority: Medium Status: Review Needed
Add unit tests to make sure that the data store works as expected on a single device. Depends on #40
1.0
Unit test Data store - Add unit tests to make sure that the data store works as expected on a single device. Depends on #40
priority
unit test data store add unit tests to make sure that the data store works as expected on a single device depends on
1
233,829
7,704,947,171
IssuesEvent
2018-05-21 14:02:30
salesagility/SuiteCRM
https://api.github.com/repos/salesagility/SuiteCRM
closed
Report Module - HTML contains invalid UTF-8 character(s)
Fix Proposed Medium Priority Resolved: Next Release bug
Hello, #### Issue <!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug --> I found an issue, and I also found what caused the problem, but I don't know how to fix it. I created a basic report that shows open cases grouped by clients: ![image](https://user-images.githubusercontent.com/5199918/27273459-1ed44b46-549d-11e7-9077-2c1b9df535dc.png) But when I try to "Download PDF", I receive a blank HTML page with the message "Report Module - HTML contains invalid UTF-8 character(s)" #### Possible Fix Like I mentioned before, I found what caused the issue: it is a Spanish accented character (ó) in the description field of a case: ![image](https://user-images.githubusercontent.com/5199918/27273609-d2393ff2-549d-11e7-9c20-80077f2e57a1.png) If I remove that symbol (ó), everything works OK.
1.0
Report Module - HTML contains invalid UTF-8 character(s) - Hello, #### Issue <!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug --> I found an issue, and I also found what caused the problem, but I don't know how to fix it. I created a basic report that shows open cases grouped by clients: ![image](https://user-images.githubusercontent.com/5199918/27273459-1ed44b46-549d-11e7-9077-2c1b9df535dc.png) But when I try to "Download PDF", I receive a blank HTML page with the message "Report Module - HTML contains invalid UTF-8 character(s)" #### Possible Fix Like I mentioned before, I found what caused the issue: it is a Spanish accented character (ó) in the description field of a case: ![image](https://user-images.githubusercontent.com/5199918/27273609-d2393ff2-549d-11e7-9c20-80077f2e57a1.png) If I remove that symbol (ó), everything works OK.
priority
report module html contains invalid utf character s hello issue i found an issue and also found what caused the problem but i don t know how to fix it i created a basic report that shows open cases grouped by clients but when i try to download pdf i receive a blank html page with the message report module html contains invalid utf character s possible fix like i mentioned before i found what caused the issue it is a spanish accented character ó in the description field of a case if i remove that symbol ó everything works ok
1
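The record above describes a PDF export failing on a single Latin-1 byte (ó) that is not valid UTF-8. A minimal sketch of how such bytes can be located is below; `find_invalid_utf8` is a hypothetical helper for illustration, not SuiteCRM's actual code.

```python
def find_invalid_utf8(raw: bytes):
    """Return (offset, byte) pairs where raw fails to decode as UTF-8."""
    bad = []
    pos = 0
    while pos < len(raw):
        try:
            raw[pos:].decode("utf-8")
            break  # the rest of the buffer decodes cleanly
        except UnicodeDecodeError as e:
            # e.start is relative to the slice, so add the slice offset back
            bad.append((pos + e.start, raw[pos + e.start]))
            pos += e.start + 1
    return bad

# 'ó' encoded as Latin-1 is the single byte 0xF3, which is not valid UTF-8.
print(find_invalid_utf8("Descripción".encode("latin-1")))  # → [(9, 243)]
```

A report generator could run a check like this before rendering and either transcode the field (e.g. decode as Latin-1, re-encode as UTF-8) or surface the offending field to the user instead of a blank page.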
303,125
9,302,828,994
IssuesEvent
2019-03-24 13:04:44
netdata/netdata
https://api.github.com/repos/netdata/netdata
opened
Improve per chart configuration documentation
area/docs priority/medium
Related to #5697. https://docs.netdata.cloud/daemon/config/#per-chart-configuration shouldn't be pointing to the definition of charts. It needs a table with the supported options, explaining what each one does.
1.0
Improve per chart configuration documentation - Related to #5697. https://docs.netdata.cloud/daemon/config/#per-chart-configuration shouldn't be pointing to the definition of charts. It needs a table with the supported options, explaining what each one does.
priority
improve per chart configuration documentation related to shouldn t be pointing to the definition of charts it needs a table with the supported options explaining what each one does
1
207,198
7,125,751,215
IssuesEvent
2018-01-20 01:12:42
santospatrick/react-ts-cdk
https://api.github.com/repos/santospatrick/react-ts-cdk
closed
Danger JS
Priority: Medium Status: Available Type: Enhancement
It would be cool to enforce lint rules, small pull requests, changelog updates & lockfiles (yarn.lock/package-lock.json) updated on each pull request (integrated with CircleCI)
1.0
Danger JS - It would be cool to enforce lint rules, small pull requests, changelog updates & lockfiles (yarn.lock/package-lock.json) updated on each pull request (integrated with CircleCI)
priority
danger js it would be cool to enforce lint rules small pull requests changelog updates lockfiles yarn lock package lock json updated on each pull request integrated with circleci
1
356,548
10,594,368,762
IssuesEvent
2019-10-09 16:36:49
dotkom/design-system
https://api.github.com/repos/dotkom/design-system
opened
Storybooks is hella broken on mobile
Priority: Medium
Sometimes I want to see how the components are on mobile. It's really hard to do right now.
1.0
Storybooks is hella broken on mobile - Sometimes I want to see how the components are on mobile. It's really hard to do right now.
priority
storybooks is hella broken on mobile sometimes i want to see how the components are on mobile it s really hard to do right now
1
249,059
7,953,545,756
IssuesEvent
2018-07-12 02:15:02
hackoregon/civic-devops
https://api.github.com/repos/hackoregon/civic-devops
opened
Rename and switch ownership of legacy 2017 databases
Priority: medium help wanted legacy maintenance
Getting all the 2017 databases under one roof, I'm thinking about me or another person in the future wrangling all the databases as more come online, and thinking that names and users that don't follow a common pattern are going to be hell when it comes time to automate operations (e.g. moving database backups around, migrating, etc) - and becomes just unnecessarily hard to remember what database went with which API project. I mean to bring the 2017 databases up to the same naming conventions as we have for the 2018 databases. That means renaming databases and swapping owners - historically not something that most of our developers have found easy, and I don't know the proper procedures off the top of my head either. So here goes writing down the plan and how it gets executed. ## Inventory of legacy databases - Budget project: - db owner: budget_user - databases: budget - Emergency Response project: - db owner: eradmin - databases: disaster, fire, police, test-disaster - Homelessness project - db owner: homelessness_user - databases: homelessness - Housing project: - db owner: housing - databases: housing_user - Transportation project: - db owner: transportation_user - databases: transportation The major change I'd like to make is for the Emergency Response project: - db owner: emergency-response - databases: emergency-response-disaster, emergency-response-fire, emergency-response-police And it would be helpful to rename all the {budget_user, homelessness_user, housing_user, transportation_user} to {budget, homelessness, housing, transportation} or even {budget-readonly, homelessness-readonly, housing-readonly, transportation-readonly}. Then every database operation can do something like `for database in databases` and just equate "project name" to "database owner". Makes it as easy as possible to equate "user" with "databases associated with that user" too.
1.0
Rename and switch ownership of legacy 2017 databases - Getting all the 2017 databases under one roof, I'm thinking about me or another person in the future wrangling all the databases as more come online, and thinking that names and users that don't follow a common pattern are going to be hell when it comes time to automate operations (e.g. moving database backups around, migrating, etc) - and becomes just unnecessarily hard to remember what database went with which API project. I mean to bring the 2017 databases up to the same naming conventions as we have for the 2018 databases. That means renaming databases and swapping owners - historically not something that most of our developers have found easy, and I don't know the proper procedures off the top of my head either. So here goes writing down the plan and how it gets executed. ## Inventory of legacy databases - Budget project: - db owner: budget_user - databases: budget - Emergency Response project: - db owner: eradmin - databases: disaster, fire, police, test-disaster - Homelessness project - db owner: homelessness_user - databases: homelessness - Housing project: - db owner: housing - databases: housing_user - Transportation project: - db owner: transportation_user - databases: transportation The major change I'd like to make is for the Emergency Response project: - db owner: emergency-response - databases: emergency-response-disaster, emergency-response-fire, emergency-response-police And it would be helpful to rename all the {budget_user, homelessness_user, housing_user, transportation_user} to {budget, homelessness, housing, transportation} or even {budget-readonly, homelessness-readonly, housing-readonly, transportation-readonly}. Then every database operation can do something like `for database in databases` and just equate "project name" to "database owner". Makes it as easy as possible to equate "user" with "databases associated with that user" too.
priority
rename and switch ownership of legacy databases getting all the databases under one roof i m thinking about me or another person in the future wrangling all the databases as more come online and thinking that names and users that don t follow a common pattern are going to be hell when it comes time to automate operations e g moving database backups around migrating etc and becomes just unnecessarily hard to remember what database went with which api project i mean to bring the databases up to the same naming conventions as we have for the databases that means renaming databases and swapping owners historically not something that most of our developers have found easy and i don t know the proper procedures off the top of my head either so here goes writing down the plan and how it gets executed inventory of legacy databases budget project db owner budget user databases budget emergency response project db owner eradmin databases disaster fire police test disaster homelessness project db owner homelessness user databases homelessness housing project db owner housing databases housing user transportation project db owner transportation user databases transportation the major change i d like to make is for the emergency response project db owner emergency response databases emergency response disaster emergency response fire emergency response police and it would be helpful to rename all the budget user homelessness user housing user transportation user to budget homelessness housing transportation or even budget readonly homelessness readonly housing readonly transportation readonly then every database operation can do something like for database in databases and just equate project name to database owner makes it as easy as possible to equate user with databases associated with that user too
1
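The database-renaming record above boils down to mapping legacy owner names onto a common convention. A sketch of generating the corresponding PostgreSQL statements from that mapping is below; the owner names come from the issue text, and `rename_statements` is an illustrative helper, not the project's actual tooling (standard `ALTER ROLE ... RENAME TO` syntax is assumed).

```python
def rename_statements(mapping):
    """Generate ALTER ROLE rename statements for an old-name -> new-name mapping."""
    return [f'ALTER ROLE "{old}" RENAME TO "{new}";' for old, new in mapping.items()]

# Legacy owners and target names as listed in the issue body.
legacy_owners = {
    "budget_user": "budget",
    "homelessness_user": "homelessness",
    "housing_user": "housing",
    "transportation_user": "transportation",
    "eradmin": "emergency-response",
}
for stmt in rename_statements(legacy_owners):
    print(stmt)
```

Generating the statements rather than hand-typing them is what makes the `for database in databases` automation the issue asks for possible later.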
814,421
30,506,788,918
IssuesEvent
2023-07-18 17:29:43
DARPA-ASKEM/simulation-service
https://api.github.com/repos/DARPA-ASKEM/simulation-service
closed
Timestep interval not always being respected
medium-priority
@joshday I think with the intermediate hook some additional steps are being injected that are not at the requested intervals. I could do some post-processing on the dataset to strip them out (filter rows using `isinteger`) but it might be better to fix this in the operation itself. Either way works for me.
1.0
Timestep interval not always being respected - @joshday I think with the intermediate hook some additional steps are being injected that are not at the requested intervals. I could do some post-processing on the dataset to strip them out (filter rows using `isinteger`) but it might be better to fix this in the operation itself. Either way works for me.
priority
timestep interval not always being respected joshday i think with the intermediate hook some additional steps are being injected that are not at the requested intervals i could do some post processing on the dataset to strip them out filter rows using isinteger but it might be better to fix this in the operation itself either way works for me
1
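The post-processing workaround mentioned in the record above (keep only whole-number timesteps, dropping solver-injected intermediate steps) can be sketched as follows. `keep_integer_timesteps` and the `"timestep"` key are illustrative assumptions, not the simulation-service's actual schema.

```python
def keep_integer_timesteps(rows, key="timestep"):
    """Drop rows whose timestep is not a whole number (e.g. solver-injected steps)."""
    return [r for r in rows if float(r[key]).is_integer()]

rows = [
    {"timestep": 0.0, "S": 990},
    {"timestep": 0.5, "S": 985},  # intermediate step, not on the interval
    {"timestep": 1.0, "S": 980},
]
print(keep_integer_timesteps(rows))  # keeps the 0.0 and 1.0 rows only
```

This mirrors the `isinteger` row filter the issue proposes, though as the author notes, fixing the save interval in the operation itself is the cleaner solution.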
43,441
2,889,784,748
IssuesEvent
2015-06-13 19:10:55
damonkohler/android-scripting
https://api.github.com/repos/damonkohler/android-scripting
closed
Add Bluetooth support for older Android platforms
auto-migrated Priority-Medium Type-Enhancement
``` Some devices can't use Bluetooth, Including http://code.google.com/p/backport-android-bluetooth/ may solve the issue for Android 1.5 and 1.6. ``` Original issue reported on code.google.com by `paul.se...@free.fr` on 28 Sep 2010 at 9:23
1.0
Add Bluetooth support for older Android platforms - ``` Some devices can't use Bluetooth, Including http://code.google.com/p/backport-android-bluetooth/ may solve the issue for Android 1.5 and 1.6. ``` Original issue reported on code.google.com by `paul.se...@free.fr` on 28 Sep 2010 at 9:23
priority
add bluetooth support for older android platforms some devices can t use bluetooth including may solve the issue for android and original issue reported on code google com by paul se free fr on sep at
1
36,250
2,797,417,790
IssuesEvent
2015-05-12 13:44:42
twogee/ant-http
https://api.github.com/repos/twogee/ant-http
closed
[CLOSED] Add an attribute to output response entity to a file
auto-migrated Milestone-1.1 Priority-Medium Project-ant-http Type-Enhancement
**Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)** _Monday May 11, 2015 at 22:05 GMT_ _Originally opened as https://github.com/twogee/missing-link/issues/10_ ---- ``` Add an attribute to output response entity to a file ``` Original issue reported on code.google.com by `alex.she...@gmail.com` on 19 Mar 2011 at 12:05
1.0
[CLOSED] Add an attribute to output response entity to a file - **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)** _Monday May 11, 2015 at 22:05 GMT_ _Originally opened as https://github.com/twogee/missing-link/issues/10_ ---- ``` Add an attribute to output response entity to a file ``` Original issue reported on code.google.com by `alex.she...@gmail.com` on 19 Mar 2011 at 12:05
priority
add an attribute to output response entity to a file issue by monday may at gmt originally opened as add an attribute to output response entity to a file original issue reported on code google com by alex she gmail com on mar at
1
151,737
5,826,645,085
IssuesEvent
2017-05-08 06:01:15
urfu-2016/team2
https://api.github.com/repos/urfu-2016/team2
opened
The authentication check works strangely
bug Priority: Medium Type: Back-end
On the main page: ![image](https://cloud.githubusercontent.com/assets/11522574/25791742/bc53a0d4-33db-11e7-9991-72d0877fc0ae.png) On the quests page: ![image](https://cloud.githubusercontent.com/assets/11522574/25791721/9890d7de-33db-11e7-8a4f-3d69b3f0ce4e.png)
1.0
The authentication check works strangely - On the main page: ![image](https://cloud.githubusercontent.com/assets/11522574/25791742/bc53a0d4-33db-11e7-9991-72d0877fc0ae.png) On the quests page: ![image](https://cloud.githubusercontent.com/assets/11522574/25791721/9890d7de-33db-11e7-8a4f-3d69b3f0ce4e.png)
priority
the authentication check works strangely on the main page on the quests page
1
25,913
2,684,042,036
IssuesEvent
2015-03-28 16:05:14
ConEmu/old-issues
https://api.github.com/repos/ConEmu/old-issues
closed
Display with raster fonts
1 star bug imported Priority-Medium
_From [vga...@gmail.com](https://code.google.com/u/112474121628973010974/) on January 14, 2012 13:37:23_ OS version: Win7 x64 ConEmu version: 120110 x64 Far version: 2/3 When the [Raster Fonts 10x18] font is selected, some of the frame borders are displayed as unreadable characters, while in a regular console with the same font all characters are readable http://s018.radikal.ru/i528/1201/e7/2e49f2fbd054.jpg **Attachment:** [far.JPG](http://code.google.com/p/conemu-maximus5/issues/detail?id=478) _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=478_
1.0
Display with raster fonts - _From [vga...@gmail.com](https://code.google.com/u/112474121628973010974/) on January 14, 2012 13:37:23_ OS version: Win7 x64 ConEmu version: 120110 x64 Far version: 2/3 When the [Raster Fonts 10x18] font is selected, some of the frame borders are displayed as unreadable characters, while in a regular console with the same font all characters are readable http://s018.radikal.ru/i528/1201/e7/2e49f2fbd054.jpg **Attachment:** [far.JPG](http://code.google.com/p/conemu-maximus5/issues/detail?id=478) _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=478_
priority
display with raster fonts from on january os version conemu version far version when the font is selected some of the frame borders are displayed as unreadable characters while in a regular console with the same font all characters are readable attachment original issue
1
640,439
20,783,215,624
IssuesEvent
2022-03-16 16:28:59
zeyneplervesarp/swe574-javagang
https://api.github.com/repos/zeyneplervesarp/swe574-javagang
closed
the list of participants should be on the service page - frontend
enhancement frontend low priority difficulty-medium
#15 this is the frontend issue for the participant list requirement
1.0
the list of participants should be on the service page - frontend - #15 this is the frontend issue for the participant list requirement
priority
the list of participants should be on the service page frontend this is the frontend issue for the participant list requirement
1