| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
250272470 | subsys: bluetooth: shell: gatt_read: Reset offset to 0
Unless an offset was specified, it should default to 0, whereas
previously the value from the last command was used.
Signed-off-by: Paul Sokolovsky paul.sokolovsky@linaro.org
Btw, somehow gatt-read command didn't work for me from shell. Maybe a timing issue.
Fixed commit message too.
| gharchive/pull-request | 2017-08-15T09:59:34 | 2025-04-01T06:46:21.049819 | {
"authors": [
"pfalcon"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/1126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
489828426 | soc : arm : microchip_mec Move custom busy wait config to MEC1501 SoC
The custom busy wait configuration item is required for the whole MEC1501 series
when its RTOS timer driver is enabled. We moved the configuration logic
from the board level to the MEC1501 SoC level.
Signed-off-by: Scott Worley scott.worley@microchip.com
This PR addresses issue #18880 boards: mec15xxevb_assy6853: consider moving ARCH_HAS_CUSTOM_BUSY_WAIT to SoC definition.
Hi,
I pushed changes to keep frequency selection at the board level and only timer selection defaults are in SoC level.
Rebase to fix GitLint issue.
| gharchive/pull-request | 2019-09-05T15:11:21 | 2025-04-01T06:46:21.052093 | {
"authors": [
"scottwcpg"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/18942",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
562921755 | Fix 22563
Fixes: #22563
Currently, CMake cannot set common west flash arguments like --bin-file using the board_runner_args() function or related functions.
This is because internally, those common arguments are handled differently than the runner-specific ones which board_runner_args() has power over. This is inconsistent and causing problems for users, so fix it by:
creating a new file in the build directory, runners.yaml, which saves all the state which was formerly used to populate the default --bin-file value as a true command line argument instead of using CMake cache variables for this purpose.
using runners.yaml from the west flash, etc. commands to put all the arguments on equal footing.
This is a fairly significant refactor to run_common.py, but it's the cleanest way to remove the artificial distinction between the types of arguments.
Since we're close to a release, let's keep the CMake cache variables exactly where they are. They're still used by 'west sign' for now, anyway.
After the release I'll get rid of all the redundant caching and update west sign as well.
[scripts: fix west sign when ZEPHYR_BASE is unset](/zephyrproject-rtos/zephyr/pull/22707/commits/5f7777719d634dc9b95c857da109bdec227fcafd)
I tacked this on at the end; hope it's OK since it also fixes an issue.
I'm getting this error when trying to build using west, and NOT setting ZEPHYR_BASE
$ west build -bnrf52840_pca10056
WARNING: can't find the zephyr repository
- no --zephyr-base given
- ZEPHYR_BASE is unset
- west config contains no zephyr.base setting
- no manifest project has name or path "zephyr"
If this isn't a Zephyr workspace, you can silence this warning with something like this:
west config zephyr.base not-using-zephyr
-- west build: build configuration:
source directory: /tmp/pr-test-ncs/22707/zephyr/zephyr/samples/hello_world
build directory: /tmp/pr-test-ncs/22707/zephyr/zephyr/samples/hello_world/build (created)
BOARD: nrf52840_pca10056 (origin: command line)
-- west build: generating a build system
CMake Error at CMakeLists.txt:5 (include):
include could not find load file:
/cmake/app/boilerplate.cmake
This is caused by a bug in west 0.7.1
https://github.com/zephyrproject-rtos/west/issues/378
This is also a step along the way to https://github.com/zephyrproject-rtos/zephyr/issues/16205.
Setting --bin-file foo.bin for the pyocd runner without a foo.bin file present now results in a more or less cryptic message:
FATAL ERROR: Cannot flash; no hex (/home/brix/Projects/zephyrproject/zephyr/tests/boards/board_shell/build/zephyr/zephyr.hex) or bin (foo.bin) files found.
Nevermind. The same happens when calling west flash --bin-file foo.bin with a pyocd runner on master. We can address that in a separate issue.
What's the issue? It sounds like you're saying that calling the script on a nonexistent file returns an error, but that makes sense to me.
It does make sense to error out on a non-existent file, of course. It is just confusing that the pyocd runner attempts to use a zephyr.hex file when I explicitly passed --bin-file as an argument.
It is not related to this issue, however.
Yes, that is a longstanding issue.
Eventually I want to deprecate users directly using --bin-file --hex-file etc. and just have a single --file argument. If set, it means "use this exact file", and if unset the runner should pick its favorite among the preconfigured --bin-file, --hex-file, etc.
@jhedberg this is good to go if you want it for 2.2
| gharchive/pull-request | 2020-02-11T01:10:32 | 2025-04-01T06:46:21.062857 | {
"authors": [
"henrikbrixandersen",
"mbolivar",
"tejlmand"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/22707",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
597403419 | drivers: nrf: uarte: check if RXSTARTED before issue STOPRX
Only issue STOPRX if RX has started; otherwise we will get stuck in a while loop.
@carlescufi FYI. Looks like something stalled with the checks. Is there a way to restart?
@rerickson1 actually not sure. Try rebasing onto the latest master and force-pushing to see if that fixes it.
@anangl Could use another review here.
| gharchive/pull-request | 2020-04-09T16:29:14 | 2025-04-01T06:46:21.064808 | {
"authors": [
"carlescufi",
"rerickson1"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/24240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
674849369 | Bluetooth: Mesh: Heartbeat period starts at tx
Starts the periodic heartbeat publish period at the end of the
publication, instead of at the ordering time. This ensures that the
heartbeat period doesn't get shortened by other enqueued messages.
Signed-off-by: Trond Einar Snekvik Trond.Einar.Snekvik@nordicsemi.no
cc @michal-narajowski
| gharchive/pull-request | 2020-08-07T08:20:23 | 2025-04-01T06:46:21.066579 | {
"authors": [
"trond-snekvik"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/27424",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
791201344 | Implement null-pointer de-referencing detection for Cortex-M
Fixes #3634
The PR implements null-pointer dereferencing (R/W) detection for Cortex-M architecture
using the DWT peripheral where available
using the MPU peripheral when DWT is not available
It is recommended (and implemented) as a debug feature only: enabling null-pointer detection will:
increase flash footprint, as it will un-map some area starting from address 0x0 (1kB by default)
reserve 1 or 2 DWT comparator(s), depending on the architecture variant, or,
consume 1 MPU region, if DWT is not used/present.
Null pointer exception detection is enforced in the test suite, to be able to catch accidental null pointer dereferencing
It is emulated in QEMU (mps2_an385).
The feature is not recommended in production code.
The feature is useful for privileged threads/ISRs, which normally have access to all flash/SRAM.
It's also useful for user threads, as reading the flash area is normally allowed in nPRIV mode.
The feature is implemented based on the same principle as the solution for x86 (see #3634)
Missing/Future-work
documentation [release note entry is now added] of the feature
possibly a new error code specific to null-pointer exception (e.g. K_ERROR_NULL_POINTER_EXCEPTION) (CC @andrewboie)
This is very cool, thanks @ioannisg . I tried out the test on mimxrt1050_evk, but unfortunately the exception doesn't fire and the test fails. I haven't debugged why yet.
Thanks @MaureenHelm ! Please share some log of the test output, so I can take a look. I could also try to debug on mimxrt1015 (that has to wait until next week, though).
One thing that might help here is whether the flashing of the board leaves the Debug mode on (then, it is true that the DWT won't be able to raise exceptions). I've seen this at least with openOCD on SAME70 platform.
That appears to have been the problem. The test passes after POR via SW4.
I will make a comment in the Kconfig option, stressing that the feature requires Normal mode. @MaureenHelm thanks.
@hakehuang Do you see any issues with this for board farm testing?
Thanks Maureen. I saw this problem too, but couldn't find the root cause before, because the test always passed when I went to check; it looks like I pressed the reset key when checking.
@MaureenHelm @hakehuang I added a comment in the Kconfig definition stressing that we need the core to be in Normal mode, to use this software debug feature (the null pointer exception detection), if we're using the DWT peripheral. I also added a run-time assert for this. Let me know if you have more concerns around it.
@ioannisg , I am ok with it, thanks a lot.
@carlescufi @galak would you like to take a quick look as well?
| gharchive/pull-request | 2021-01-21T15:12:02 | 2025-04-01T06:46:21.078559 | {
"authors": [
"MaureenHelm",
"hakehuang",
"ioannisg"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/31481",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1072977713 | drivers: uart: npcx: use PM constraint for the receiving
The driver sets the PM constraint to prevent the system
from entering the suspend state for a CONFIG_UART_CONSOLE_INPUT_EXPIRED
period when data comes in. The constraint is released after the
CONFIG_UART_CONSOLE_INPUT_EXPIRED time expires.
With this PR, the PM policy doesn't have to check the timestamp
by calling npcx_power_console_is_in_use() explicitly. So the related
npcx_power_console_is_in_use*() functions can be removed.
This PR also fixes the coding style of the driver.
Signed-off-by: Jun Lin CHLin56@nuvoton.com
Signed-off-by: Wealian Liao WHLIAO@nuvoton.com
@keith-zephyr Do you have suggestion on this PR? Any comment is appreciated.
Could we get the PR merged?
| gharchive/pull-request | 2021-12-07T06:45:04 | 2025-04-01T06:46:21.082126 | {
"authors": [
"ChiHuaL"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/40947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1074140040 | Intel VT-D support
Finally got this convoluted piece of hardware to work.
Main goal, at the moment at least, is to get multi-vector MSI working on x86_64 (idea being to avoid allocating x86 vectors contiguously, which is the same strategy followed by Linux actually).
But in the long term, this could also prove to be useful when Zephyr will get an affinity API (moving one irq from one core to another is much simpler then).
The PR is heavy. It's built on top of #40929
@jhedberg @superna9999 @dcpleung @nashif @andyross Please review, I really would like to see this PR merged for the next release.
@andyross @nashif can you take a look please?
Also verified that there are no additional test case failures on up2, acrn_ehl_crb, ehl_crb, and adsp.
| gharchive/pull-request | 2021-12-08T08:27:41 | 2025-04-01T06:46:21.084558 | {
"authors": [
"enjiamai",
"tbursztyka"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/40976",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1114974403 | dts: pwm : flag to enable a complementary output of a pwm channel pin
Depending on the MCU and the timer instance, several channels
can enable the complementary output for the PWM signal.
This flag complements the PWM_POLARITY for a PWM channel.
Signed-off-by: Francois Ramu francois.ramu@st.com
As this is rather SoC specific (and not generically PWM related), have you considered if this could be configured using devicetree instead? Isn't this for routing of the generated PWM signal?
For reference, there's another proposal I made that is vendor-specific: https://github.com/zephyrproject-rtos/zephyr/pull/42119#issuecomment-1021226073
API meeting:
@henrikbrixandersen proposes using a similar approach to the one suggested here
@FRASTM Do you plan on reworking this with an approach similar to that for GPIO in https://github.com/zephyrproject-rtos/zephyr/pull/39767?
Yes, I guess this means: abandon this one and update https://github.com/zephyrproject-rtos/zephyr/pull/42119
with a PWM_STM32_COMPLEMENTARY flag in include/dt-bindings/pwm/stm32_pwm.h,
rebased on 44fdeb1dd89dc6c0ba9cd0bb2cb936f166fb8560
closing after the merge of https://github.com/zephyrproject-rtos/zephyr/pull/42119/
| gharchive/pull-request | 2022-01-26T12:39:28 | 2025-04-01T06:46:21.090777 | {
"authors": [
"FRASTM",
"carlescufi",
"gmarull",
"henrikbrixandersen"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/42174",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
276574639 | tests/drivers/uart/uart_basic_api : Added build only tag
The testcase yaml requires some interactive inputs to be provided for
execution and hence fails in automation. Hence making it build
only.
Signed-off-by: ravishankar karkala Mallikarjunayya ravix.shankar.km@intel.com
Hi @kumarvikash1, could you please review the code.
| gharchive/pull-request | 2017-11-24T10:27:56 | 2025-04-01T06:46:21.092448 | {
"authors": [
"shankarkm"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/5147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1796335324 | sam: fix ENABLE bit blocking ADC channel setup
This should close:
https://github.com/zephyrproject-rtos/zephyr/issues/59806
Thanks to @attie-argentum for helping fix this issue.
Compliance checks should pass.
@maksimdrachov please fix the compliance check
@maksimdrachov - please could you fix the compliance check? It seems you committed with a different email to the Signed-off-by line.
| gharchive/pull-request | 2023-07-10T09:15:38 | 2025-04-01T06:46:21.094626 | {
"authors": [
"MaureenHelm",
"attie-argentum",
"maksimdrachov",
"nandojve"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/60197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2540683472 | drivers: eth_xlnx_gem: Align pll divisor settings with reference manual
The reference manual states that for an IOPLL of 1GHz, divisor0 shall be 8 and divisor1 shall be 1. The existing implementation swaps them and makes cross-validation with the register settings made by the Xilinx ecosystem more difficult.
Please fix compliance issues.
@blofeld80 do you think you will be able to address the compliance issues?
@blofeld80 ping?
@kartben In case this PR and the author's other PR get closed due to inactivity, I'll take care of the issue from #78811. I had stumbled across that issue a few days before that PR was opened and it's a significant issue, so I'd definitely like to fix that. The issue of this PR is basically deprecated at least on the Zynq (not yet the Ultrascale) if clock configuration gets moved over to the clock control driver I submitted which is currently still in review by Henrik.
@ibirnbaum Great, thanks! I would suggest you go ahead as @blofeld80 has unfortunately never followed up.
| gharchive/pull-request | 2024-09-22T06:39:57 | 2025-04-01T06:46:21.097792 | {
"authors": [
"blofeld80",
"ibirnbaum",
"kartben",
"pdgendt"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/78808",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2597138342 | doc: boards: Migrate -many- boards to new zephyr:board directive/role
Examples of the PR applied to some boards:
ST board example: https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/st/nucleo_c031c6/doc/index.html
Espressif board example: https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/espressif/esp32_ethernet_kit/doc/index.html
ADI board example: https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/adi/max32666evkit/doc/index.html
NXP board example: https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/nxp/frdm_k64f/doc/index.html
Infineon board example: https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/infineon/cy8ckit_062s4/doc/index.html
Kind of a big PR, but it's split in commits per vendor -- as some of the documentation pages (ex. samples) that now reference boards using the new :zephyr:board: directive sometimes mention multiple boards on the same line, this would lead to many conflicts based on the order in which PRs are merged if I were to split the changes into multiple PRs.
There are still quite a few boards/vendors to migrate over but thought I would get the ball rolling :)
Some of the documentation pages that haven't been migrated yet are typically:
documentation pages that correspond to multiple boards, as the .. zephyr:board:: directive is designed for only one board per doc page at the moment (ex. https://docs.zephyrproject.org/latest/boards/intel/adl/doc/index.html which targets both intel_adl_crb and intel_adl_rvp)
boards that have multiple SoCs and for which original author split the docs in multiple pages. (ex. https://docs.zephyrproject.org/latest/boards/arm/fvp_baser_aemv8r/doc/aarch32.html and https://docs.zephyrproject.org/latest/boards/arm/fvp_baser_aemv8r/doc/aarch64.html)
boards with doc page that is incorrectly structured in the first place (ex., multiple titles, such as https://docs.zephyrproject.org/latest/boards/sparkfun/micromod/doc/index.html)
When it comes to deciding whether the main image in a board documentation page should be removed to only appear in the new "card", the thought process has been the following: whenever the image really only was there for illustration purposes, it's been removed (ex. https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/m5stack/m5stack_atoms3/doc/index.html), but when it is part of a narrative where being able to see the image right next to the text mentioning it matters, it's been preserved (while also appearing in the card); ex. https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/particle/nrf51_blenano/doc/index.html#connections-and-ios
Rebased to solve conflicts
Updated the beagleboard commit to ensure the new BeagleBone AI-64 was covered
Hi @kartben ,
No image from Atmel / GD should be removed. They are images from the development kit and should be kept.
Can you elaborate? The images are still there, only they're showing in the "side card" now.
https://builds.zephyrproject.io/zephyr/pr/80048/docs/boards/gd/gd32a503v_eval/doc/index.html
I understood now. The change below doesn't affect the outcome.
-.. image:: img/gd32a503v_eval.jpg
- :align: center
- :alt: gd32a503v_eval
Sorry about the noise.
That's no problem at all, really! I understand how the change can be confusing at first sight so no worries :)
For others wondering the same, this is something I have tried to explain in the initial comment in this PR ("When it comes to deciding whether the main image in a board documentation page [...]).
| gharchive/pull-request | 2024-10-18T10:27:48 | 2025-04-01T06:46:21.110851 | {
"authors": [
"kartben",
"nandojve"
],
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/80048",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
464297330 | New Create2 Guide
Guide to help people create upgradable contracts using zos create2
I think we're good to go! Left one suggestion only, and then we can merge.
@buddies2705 looks good, but why did you close the PR? I think it was good to merge, right?
How to reopen this?
I'll reopen and merge, no worries
| gharchive/pull-request | 2019-07-04T14:31:57 | 2025-04-01T06:46:21.117495 | {
"authors": [
"buddies2705",
"spalladino"
],
"repo": "zeppelinos/zos",
"url": "https://github.com/zeppelinos/zos/pull/1070",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Improvements - Screen Titles
Improvements to the titles of the professional experience screens
Reopening
Done
| gharchive/pull-request | 2022-01-14T01:51:29 | 2025-04-01T06:46:21.119347 | {
"authors": [
"zergiocunha"
],
"repo": "zergiocunha/CandidateManager",
"url": "https://github.com/zergiocunha/CandidateManager/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Define the data model
The goal is to define the data model for the project.
Draw an ER diagram (MER) containing all the entities involved in the project.
The entities so far are:
Professor (teacher)
Estudante (student)
Exame (exam)
Questão (question)
The initial idea for the ER diagram was the following:
It will be necessary to change the database of the project being used as a base, in order to implement the new entities and their respective relationships.
According to the model, the new entities will have these characteristics:
Questao
id
id_professor
enunciado
resposta
tipo_questao (verdade/falso, multipla_escolha, dissertativa, numerica)
QuestaoMultiplaEscolha
id_questao
opcao a
opcao b
opcao c
opcao d
Exame
id
id_professor
id_turma
data_inicio
data_fim
nome
nota_exame (sum of the points each question is worth within the exam)
QuestaoExame (associative entity)
id_questao
id_exame
nota_questao
NotasExames (stores the grade a student got on an exam)
id_estudante
id_exame
nota_exame
RespostaQuestaoExame (a student's answer to a given question within a specific exam)
id_estudante (PK FK)
id_exame (PK FK)
id_questao (PK FK)
resposta_aluno
nota_aluno_questao
| gharchive/issue | 2023-06-29T18:28:48 | 2025-04-01T06:46:21.133746 | {
"authors": [
"maydMoon",
"zeroCass"
],
"repo": "zeroCass/unb-web",
"url": "https://github.com/zeroCass/unb-web/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
384806680 | Migration from Smalltalkhub
I migrated the whole history from Smalltalkhub.
I also added the documentation and an installation guide, plus integration with Travis CI and Coveralls (these last two features, once merged into your repo, will need some work. I can do it if you grant me permission).
It would be great to create at least the last release (2.6.0). I encourage you to start using semantic versioning as described in https://semver.org
I left ConfigurationOf and the Example package untouched.
@fortizpenaloza, awesome, thank you very much. I'll take a look at it after work and merge it.
| gharchive/pull-request | 2018-11-27T13:57:07 | 2025-04-01T06:46:21.145526 | {
"authors": [
"fortizpenaloza",
"zeroflag"
],
"repo": "zeroflag/Teapot",
"url": "https://github.com/zeroflag/Teapot/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1323343214 | How to bulk update if the model was generated with cache?
As the title says:
When goctl generates a model with cache, how do you do a bulk update? Update row by row in a for loop, or batch-query the set of ids first and delete the cached id set after updating?
This still depends on the scenario. If the performance requirements are not extreme, updating one row at a time in a for loop is not a big problem; if the volume is very large and there are performance issues, batching may be better. The first approach is easier, the second is more troublesome: if you only have a primary-key cache it's fine, but if there are multiple secondary-index caches you also have to delete the corresponding secondary-index cache entries. Usually the first approach is enough.
| gharchive/issue | 2022-07-31T02:24:05 | 2025-04-01T06:46:21.146822 | {
"authors": [
"Mikaelemmmm",
"xhochipe"
],
"repo": "zeromicro/go-zero",
"url": "https://github.com/zeromicro/go-zero/issues/2211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1863458 | pyzmq auto reconnect fails
If I understood this right, ZeroMQ automatically reconnects a broken pub/sub connection.
But in my case, if the connection breaks, the subscriber does not recover after a reconnect (the client IP changed).
https://gist.github.com/1224274d8ab5a997682f
zmq_version = 2.1.7
pyzmq_version = 2.1.7
This question is probably better directed to zeromq-dev, as I don't think there's anything pyzmq specific here.
Ok thanks.
Got it to work with an interleaved heartbeat and manual reconnect.
| gharchive/issue | 2011-10-10T12:02:18 | 2025-04-01T06:46:21.157649 | {
"authors": [
"minrk",
"vindolin"
],
"repo": "zeromq/pyzmq",
"url": "https://github.com/zeromq/pyzmq/issues/145",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
85706601 | zmq.tests.test_socket.TestSocket.test_tracker() hangs
zmq.tests.test_socket.TestSocket.test_tracker() almost always hangs with all versions of Python (at least CPython 2.6, 2.7, 3.2, 3.3, 3.4) in both PyZMQ 14.6.0 and git trunk.
I use ZeroMQ 4.1.1.
Result of test suite of PyZMQ git trunk with CPython 2.7:
$ PYTHONPATH="build/lib.linux-x86_64-2.7" nosetests-2.7 -s -v -w build/lib.linux-x86_64-2.7/zmq/tests
...
test sending unicode objects ... ok
test_shadow (zmq.tests.test_socket.TestSocket) ... ok
test_shadow_pyczmq (zmq.tests.test_socket.TestSocket) ... SKIP: Requires pyczmq
test set/getsockopt roundtrip. ... ok
subclasses can assign attributes ... ok
test the MessageTracker object for tracking when zmq is done with a buffer ...
I believe this is a change in libzmq-4.1.1, which is not yet supported. I'll try to find some time to figure out the various things libzmq 4.1 has broken. Thanks for testing.
Are you using zmq-4.1.1 and not libzmq master? There is a bug in libzmq master that causes these tests to fail, but this test definitely passes with 4.1 and all those CPythons on travis and my machines. In those cases, the tests is behaving correctly, and it is libzmq where the behavior is incorrect.
This was a bug in libzmq, and is fixed.
| gharchive/issue | 2015-06-06T06:07:27 | 2025-04-01T06:46:21.161001 | {
"authors": [
"Arfrever",
"minrk"
],
"repo": "zeromq/pyzmq",
"url": "https://github.com/zeromq/pyzmq/issues/682",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1977447024 | Add On_Item_Expanding and On_Item_Expanded and associated stuff.
Add On_Item_Expanding and On_Item_Expanded and associated stuff to Common_Controls
Great!
May I suggest a little change?
move the definitions from "type NMTREEVIEW is..." to "function Message_To_NmTreeView_Ptr is..." to the private part of the spec (just before "end GWindows.Common_Controls;")
rename NMTREEVIEW_Ptr as Pointer_To_NMTREEVIEW_Type (if GNAT Studio is in a good mood, the Refactoring context menu entry will do the job)
rename Message_To_NmTreeView_Ptr as Message_To_NmTreeView_Pointer
The bonus there is that the corresponding lines in gwindows-common_controls-ex_tv.adb and gwindows-common_controls-ex_tv_generic.adb can be removed. Further, some stuff from those extensions could be moved to Common_Controls.
Possibly it is simpler if I accept the PR and make those small changes on master. Would it be better for you?
I'm working on it.
Done.
| gharchive/pull-request | 2023-11-04T17:52:56 | 2025-04-01T06:46:21.178392 | {
"authors": [
"NicoPy",
"zertovitch"
],
"repo": "zertovitch/gwindows",
"url": "https://github.com/zertovitch/gwindows/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
165992195 | New tag: "encart" (sidebar) or "annexe" (appendix)
In my view, we are missing an "encart" (sidebar) or "annexe" (appendix) tag in the markdown, in the same way that we have the attention / question / etc. tags.
It would be used to hold content that is ancillary to the main text of the tutorial or article, such as summaries, proofs, complementary explanations, etc.
They are often found in popular-science publications as boxed sidebars (with a frame around the text and everything); I am not quite sure how that could be rendered on the web.
Source: Renard de l'espace
I have nothing against doing it, but it is an evolution of the markdown: at what point do we decide to make this kind of addition without discussing it publicly? Even without a full-blown ZEP, and although I understand the need, why accept this addition rather than other requests? (I have already been asked for theorem blocks, definition blocks, custom blocks, etc.)
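For what it's worth, the idea can be prototyped outside ZMarkdown with a stdlib-only transform; the `[[encart]] ... [[/encart]]` syntax and the `encart` CSS class below are purely illustrative, not the project's:

```python
import re

# Hypothetical syntax: [[encart]] ... [[/encart]] -> <div class="encart">...</div>
ENCART_RE = re.compile(r"\[\[encart\]\](.*?)\[\[/encart\]\]", re.DOTALL)

def render_encart_blocks(text):
    """Wrap each [[encart]] block in a styled div, leaving the rest of the text untouched."""
    return ENCART_RE.sub(
        lambda m: '<div class="encart">' + m.group(1).strip() + "</div>", text
    )
```

A real implementation would live as a block processor in the Markdown pipeline; this only shows the transformation itself.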
| gharchive/issue | 2016-07-17T19:43:38 | 2025-04-01T06:46:21.181318 | {
"authors": [
"GerardPaligot",
"cgabard"
],
"repo": "zestedesavoir/Python-ZMarkdown",
"url": "https://github.com/zestedesavoir/Python-ZMarkdown/issues/104",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
417131922 | Coding style check
As the title says, I notice that the coding style is not consistent across this project.
You can find both camelCase and underscore-based variable declarations.
Would you consider adopting the PSR-2 coding style?
The style being used is something like
Classes
Class names start with an uppercase letter
class Logger {}
and if the class name has more than one word, then
class AliasLoader {}
Methods
Method names are camelCase, like
public isLogin() {}
or
public getUserById() {}
Functions
Function names mostly use underscores, like
function is_login() {}
So I think the above follows the PSR-2 standard.
Control structures and iteration are also according to the PSR standard.
Yes, you are right that the variable names are not according to the PSR-2 standard.
What about the variable declarations inside methods?
@peter279k yes, we need to fix this in the code
As we now follow the PSR-2 standard, this issue is being closed...
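As a rough illustration of the conventions discussed above, the two naming styles can be checked mechanically; this sketch is not tied to any real PSR-2 linter:

```python
import re

# camelCase, e.g. getUserById / isLogin
CAMEL_CASE = re.compile(r"^[a-z]+(?:[A-Z][a-z0-9]*)*$")
# snake_case, e.g. is_login
SNAKE_CASE = re.compile(r"^[a-z]+(?:_[a-z0-9]+)*$")

def is_camel_case(name):
    """True for camelCase method-style names."""
    return bool(CAMEL_CASE.match(name))

def is_snake_case(name):
    """True for underscore-based function-style names."""
    return bool(SNAKE_CASE.match(name))
```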
| gharchive/issue | 2019-03-05T06:02:02 | 2025-04-01T06:46:21.188836 | {
"authors": [
"Lablnet",
"peter279k"
],
"repo": "zestframework/Zest_Framework",
"url": "https://github.com/zestframework/Zest_Framework/issues/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1915038327 | Improve ZetaClient tests and coverage
General improvement for ZetaClient testing and coverage.
### Tasks
- [ ] https://github.com/zeta-chain/node/issues/1154
- [ ] https://github.com/zeta-chain/node/issues/1155
- [ ] https://github.com/zeta-chain/node/issues/1156
- [ ] https://github.com/zeta-chain/node/issues/1157
- [ ] https://github.com/zeta-chain/node/issues/1158
- [ ] https://github.com/zeta-chain/node/issues/1196
- [ ] https://github.com/zeta-chain/node/issues/1197
- [ ] https://github.com/zeta-chain/node/issues/1198
- [ ] https://github.com/zeta-chain/node/issues/1199
- [ ] https://github.com/zeta-chain/node/issues/1200
- [ ] https://github.com/zeta-chain/node/issues/1201
- [ ] https://github.com/zeta-chain/node/issues/1202
- [ ] https://github.com/zeta-chain/node/issues/1918
Closing for more described task: https://github.com/zeta-chain/node/issues/2028
| gharchive/issue | 2023-09-27T09:02:05 | 2025-04-01T06:46:21.190430 | {
"authors": [
"lumtis"
],
"repo": "zeta-chain/node",
"url": "https://github.com/zeta-chain/node/issues/1203",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2079618422 | fix: zetaclient crash due to out of bound int conversion
Description
Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
Closes:
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Include instructions and any relevant details so others can reproduce.
[ ] Tested CCTX in localnet
[ ] Tested in development environment
[ ] Go unit tests
[ ] Go integration tests
[ ] Tested via GitHub Actions
Checklist:
[ ] I have added unit tests that prove my fix feature works
What's the status on this @brewmaster012 , still draft?
What's the status on this @brewmaster012 , still draft?
Ready for review. Let's quickly merge this as it's required on mockmain, even though some of the issues are not solved optimally.
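As an illustration of the bug class named in the title, converting a wide integer into a narrower range without checking bounds, here is a hedged Python sketch of the guard (not the actual zetaclient fix, which is in Go):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def to_int64(value):
    """Return value unchanged if it fits in int64, else raise instead of wrapping or crashing."""
    if not INT64_MIN <= value <= INT64_MAX:
        raise OverflowError("value %d out of int64 range" % value)
    return value
```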
| gharchive/pull-request | 2024-01-12T20:02:15 | 2025-04-01T06:46:21.195297 | {
"authors": [
"brewmaster012",
"lumtis"
],
"repo": "zeta-chain/node",
"url": "https://github.com/zeta-chain/node/pull/1576",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2560134462 | fix(zetaclient): tolerate priorityFee > gasFee
Description
Force priorityFee=gasPrice if priorityFee > gasPrice (EIP-1559)
Fixes Polygon outbound 5963 cctx: https://zetachain.blockpi.network/lcd/v1/public/zeta-chain/crosschain/cctx/137/5963
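The fix described above amounts to a one-line clamp; a minimal sketch of the rule being enforced (function and parameter names are illustrative, not the node's actual code):

```python
def clamp_priority_fee(priority_fee, gas_price):
    """EIP-1559 sanity rule: the priority fee (tip) may never exceed the total gas price."""
    if priority_fee > gas_price:
        return gas_price
    return priority_fee
```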
Summary by CodeRabbit
New Features
Introduced a streamlined input for end-to-end testing, allowing users to specify multiple test targets.
Added support for new user accounts in the local network configuration.
Implemented version 2 migration tests for enhanced testing capabilities.
Bug Fixes
Improved clarity in release workflow descriptions and logging for better traceability.
Refactor
Simplified condition checks and removed deprecated code in various workflows and handlers.
Documentation
Updated changelog to reflect the release of version 20.0.0 with new features and fixes.
Restoring branch for cherry picking
| gharchive/pull-request | 2024-10-01T20:17:06 | 2025-04-01T06:46:21.200171 | {
"authors": [
"lumtis",
"swift1337"
],
"repo": "zeta-chain/node",
"url": "https://github.com/zeta-chain/node/pull/2950",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
54667732 | Make advanced .nyagos example
This is requested on #41.
zetamatta is going to provide the example.
But others ones are welcome, too.
(Example) %USERPROFILE%.nyagos · zetamatta/nyagos Wiki
| gharchive/issue | 2015-01-17T17:08:39 | 2025-04-01T06:46:21.201662 | {
"authors": [
"zetamatta"
],
"repo": "zetamatta/nyagos",
"url": "https://github.com/zetamatta/nyagos/issues/42",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1748912693 | Is it possible to add 'Mirror to TV'
Mirror to TV is a very useful feature; looking forward to seeing it added.
Sadly I wasn't able to do this, since I don't understand the nature of Chromecast and other TV mirroring logic.
If someone can provide Dart code for handling it, I may be able to implement it in the app
I made a modified version of a previous Chromecast package. It supports passing some metadata and few other things.
https://github.com/sommye-ctr/video_cast
It definitely has few bugs but maybe this would give a start?
Sadly I wasn't able to do this, since I don't understand the nature of Chromecast and other TV mirroring logic.
If someone can provide Dart code for handling it, I may be able to implement it in the app
https://pub.dev/packages/dlna_dart
I think you can take a look at this
Sadly I wasn't able to do this, since I don't understand the nature of Chromecast and other TV mirroring logic.
If someone can provide Dart code for handling it, I may be able to implement it in the app
https://pub.dev/packages/dlna_dart
I think you can take a look at this. This package scans TV devices within the local area network and casts videos to the TV.
I checked DLNA before but couldn't understand it well enough to be able to integrate it.
But when I have some free time I will check it again
I made a modified version of a previous Chromecast package. It supports passing some metadata and few other things.
https://github.com/sommye-ctr/video_cast
It definitely has few bugs but maybe this would give a start?
This only works on Android?
I made a modified version of a previous Chromecast package. It supports passing some metadata and few other things.
https://github.com/sommye-ctr/video_cast
It definitely has few bugs but maybe this would give a start?
This only works on Android?
Yes, it is only for Android as of now
I made a modified version of a previous Chromecast package. It supports passing some metadata and few other things.
https://github.com/sommye-ctr/video_cast
It definitely has few bugs but maybe this would give a start?
This only works on Android?
Yes, it is only for Android as of now
That's a very limiting factor 😢 I guess a Dart solution would be best
Yeah, that's true. I did try to look into the Dart ones but wasn't sure about them. You could take a look at some of those.
Yeah, that's true. I did try to look into the Dart ones but wasn't sure about them. You could take a look at some of those.
I will try, but no promises since I don't have any knowledge of mirroring
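For anyone exploring the DLNA route in plain code, discovery starts with an SSDP M-SEARCH multicast; a stdlib-only Python sketch, with the MediaRenderer search target assumed rather than taken from dlna_dart:

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:MediaRenderer:1", mx=2):
    """Build the SSDP M-SEARCH request used to find DLNA renderers on the LAN."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: %s:%d\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: %d\r\n"
        "ST: %s\r\n"
        "\r\n" % (SSDP_ADDR, SSDP_PORT, mx, search_target)
    )

def discover(timeout=2.0):
    """Multicast the M-SEARCH and collect raw responses; each responder is a candidate device."""
    replies = []
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_msearch().encode("ascii"), (SSDP_ADDR, SSDP_PORT))
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Actually playing media then requires talking UPnP AVTransport to a discovered device, which is the harder part; this only covers discovery.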
| gharchive/issue | 2023-06-09T01:13:30 | 2025-04-01T06:46:21.214140 | {
"authors": [
"JieNengi",
"sommye-ctr",
"zezo357"
],
"repo": "zezo357/flutter_meedu_videoplayer",
"url": "https://github.com/zezo357/flutter_meedu_videoplayer/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
92601322 | Improve error reporting
I run deployer as external process in my application.
I track process exit code value to find out whether deployment process succeeded or failed. The issue is that in case of any problem deployer always return 1 regardless of the real cause of the problem. Could the deployer return more descriptive values?
Currently error messages are reported to a console. Could they be reported to application error stream too?
I prepared a fork with changes I would like to introduce.
We return a non-zero exit status. *nix systems use non-zero to indicate any failure, but there's no real standardization on what different values mean. At most, we could add verbosity flags and logging based on those flags.
Currently error messages are reported to a console. Could they be reported to application error stream too?
IIRC, we write to STDERR whenever possible for error situations; if you can find places where we are not, please file separate issues and/or pull requests to fix them.
I prepared a fork with changes I would like to introduce.
Then please submit a pull request if you haven't already; if you have, link the two (by dropping a comment in this one referencing the pull request, or in the pull request referencing this issue).
We return a non-zero exit status. *nix systems use non-zero to indicate any failure, but those no real standardization on what different values mean. At most, we could add verbosity flags and logging based on those flags.
My problem was that I was running the deployer and I could not get the result file. The only information I had was exit code value 1 and empty error stream. I did not know what went wrong and why.
My first attempt was to define more exit codes for each problematic situation. Then I made use of the deployer output stream, instead of the error stream, in my application to find out what went wrong. Finally I noticed that there was a problem with running Composer. To investigate it I had to attach the Composer output to the deployer error message. All those changes allowed me to find out that the final package file was not created because of a corrupted Composer cache.
Currently I see that for my purposes a single non-zero exit code would be enough, but it must be supported by extensive information in the output/error streams.
IIRC, we write to STDERR whenever possible for error situations; if you can find places where we are not, please file separate issues and/or pull requests to fix them.
I am not good at PHP, but my impression is that you do not write anything to the error stream.
Then please submit a pull request if you haven't already; if you have, link the two (by dropping a comment in this one referencing the pull request, or in the pull request referencing this issue).
I added link to my pull request to my comment above.
As I wrote above, my fork contains all the changes that helped me to investigate my problem. Currently I see that only one change is required: the Composer output attached to the error message in case the Composer run fails.
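The workflow described here, running the deployer as an external process and inspecting its exit status and error stream, looks roughly like this from the calling side (a generic sketch, not tied to zf-deploy's actual CLI):

```python
import subprocess

def summarize_failure(returncode, stderr):
    """Turn a bare non-zero exit status into a message that actually explains the failure."""
    return "exit code %d: %s" % (returncode, stderr.strip() or "<stderr was empty>")

def run_tool(cmd):
    """Run an external command, returning (exit_code, stdout, stderr) for diagnosis."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # The exit code alone says little; the captured stderr is what explains it.
        print(summarize_failure(proc.returncode, proc.stderr))
    return proc.returncode, proc.stdout, proc.stderr
```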
| gharchive/issue | 2015-07-02T10:07:57 | 2025-04-01T06:46:21.220249 | {
"authors": [
"dariusz-michura",
"weierophinney"
],
"repo": "zfcampus/zf-deploy",
"url": "https://github.com/zfcampus/zf-deploy/issues/38",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1726822359 | fix: build --html add rootId
ssr build --html needs to include the rootId; otherwise the generated static HTML throws an error on mount
thanks for pr
| gharchive/pull-request | 2023-05-26T03:01:53 | 2025-04-01T06:46:21.241018 | {
"authors": [
"MervynFang",
"zhangyuang"
],
"repo": "zhangyuang/ssr",
"url": "https://github.com/zhangyuang/ssr/pull/291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1195746636 | Add PORTABLE_WHEEL option to CMakeLists.txt
Fix #4. Add PORTABLE_WHEEL option to CMakeLists.txt.
python -m build: build optimized ngtpy for local installation
SKBUILD_CONFIGURE_OPTIONS="-DPORTABLE_WHEEL=ON" python -m build: build ngtpy for portable wheels
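The two invocations differ only in an environment variable, so a build script can select the mode like this (a hedged sketch; only the env variable and command come from the PR description):

```python
def build_command(portable=False):
    """Return (env_overrides, argv) for a local-optimized or a portable-wheel build."""
    env = {}
    if portable:
        # scikit-build picks up extra CMake flags from this environment variable
        env["SKBUILD_CONFIGURE_OPTIONS"] = "-DPORTABLE_WHEEL=ON"
    return env, ["python", "-m", "build"]
```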
log
$ python -m build
Configuring Project
Working directory:
/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-pzzl3sn5/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-build
Command:
cmake /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-pzzl3sn5/ngtpy-0.1.1 -G Ninja -DCMAKE_INSTALL_PREFIX:PATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-pzzl3sn5/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-install/src/ngtpy -DPYTHON_EXECUTABLE:FILEPATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-5dg2hriy/bin/python -DPYTHON_VERSION_STRING:STRING=3.9.10 -DPYTHON_INCLUDE_DIR:PATH=/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 -DPYTHON_LIBRARY:FILEPATH=/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/libpython3.9.dylib -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-5dg2hriy/lib/python3.9/site-packages/skbuild/resources/cmake -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_OSX_DEPLOYMENT_TARGET:STRING=12.0 -DCMAKE_OSX_ARCHITECTURES:STRING=x86_64
-- The C compiler identification is AppleClang 13.1.6.13160021
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
'/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-5dg2hriy/bin/python' '-c' 'import pybind11; print(pybind11.get_cmake_dir())'
-- Building ngtpy for local installation
-- VERSION: 1.14.3
-- CMAKE_BUILD_TYPE: Release
-- CMAKE_BUILD_TYPE_LOWER: release
-- Found OpenMP_C: -Xclang -fopenmp (found version "5.0")
-- Found OpenMP_CXX: -Xclang -fopenmp (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found PythonInterp: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-5dg2hriy/bin/python (found version "3.9.10")
-- Found PythonLibs: /usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/libpython3.9.dylib
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Performing Test HAS_FLTO_THIN
-- Performing Test HAS_FLTO_THIN - Success
-- Found pybind11: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-5dg2hriy/lib/python3.9/site-packages/pybind11/include (found version "2.9.2")
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-pzzl3sn5/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-build
$ SKBUILD_CONFIGURE_OPTIONS="-DPORTABLE_WHEEL=ON" pyproject-build
Configuring Project
Working directory:
/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-gdr0oa_n/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-build
Command:
cmake /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-gdr0oa_n/ngtpy-0.1.1 -G Ninja -DCMAKE_INSTALL_PREFIX:PATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-gdr0oa_n/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-install/src/ngtpy -DPYTHON_EXECUTABLE:FILEPATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-bfo9b1hs/bin/python -DPYTHON_VERSION_STRING:STRING=3.9.10 -DPYTHON_INCLUDE_DIR:PATH=/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 -DPYTHON_LIBRARY:FILEPATH=/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/libpython3.9.dylib -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-bfo9b1hs/lib/python3.9/site-packages/skbuild/resources/cmake -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_OSX_DEPLOYMENT_TARGET:STRING=12.0 -DCMAKE_OSX_ARCHITECTURES:STRING=x86_64 -DPORTABLE_WHEEL=ON
-- The C compiler identification is AppleClang 13.1.6.13160021
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
'/private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-bfo9b1hs/bin/python' '-c' 'import pybind11; print(pybind11.get_cmake_dir())'
-- Building ngtpy for portable wheels
-- VERSION: 1.14.3
-- CMAKE_BUILD_TYPE: Release
-- CMAKE_BUILD_TYPE_LOWER: release
-- AVX will not be used to compute distances.
-- Found OpenMP_C: -Xclang -fopenmp (found version "5.0")
-- Found OpenMP_CXX: -Xclang -fopenmp (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found PythonInterp: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-bfo9b1hs/bin/python (found version "3.9.10")
-- Found PythonLibs: /usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/libpython3.9.dylib
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Performing Test HAS_FLTO_THIN
-- Performing Test HAS_FLTO_THIN - Success
-- Found pybind11: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-env-bfo9b1hs/lib/python3.9/site-packages/pybind11/include (found version "2.9.2")
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/1t/67zx481n1l1ckxzm9fzrxfy40000gn/T/build-via-sdist-gdr0oa_n/ngtpy-0.1.1/_skbuild/macosx-12.0-x86_64-3.9/cmake-build
@masajiro Would you take a look at this PR?
It seems to be fine.
@masajiro Thanks!
| gharchive/pull-request | 2022-04-07T09:08:22 | 2025-04-01T06:46:21.247104 | {
"authors": [
"masajiro",
"zhanpon"
],
"repo": "zhanpon/ngtpy",
"url": "https://github.com/zhanpon/ngtpy/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1630825257 | 🛑 ZY is down
In 23180a8, ZY (https://ziyuan.baidu.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ZY is back up in 110907c.
| gharchive/issue | 2023-03-19T08:02:36 | 2025-04-01T06:46:21.249769 | {
"authors": [
"zhaodylan"
],
"repo": "zhaodylan/uptime-ziyuan",
"url": "https://github.com/zhaodylan/uptime-ziyuan/issues/1341",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1427463987 | 🛑 ZY is down
In d0f7f73, ZY (https://ziyuan.baidu.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ZY is back up in 5b51b7c.
| gharchive/issue | 2022-10-28T15:59:11 | 2025-04-01T06:46:21.252130 | {
"authors": [
"zhaodylan"
],
"repo": "zhaodylan/uptime-ziyuan",
"url": "https://github.com/zhaodylan/uptime-ziyuan/issues/818",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
617950375 | 本地化策略的路径怎么填写呢
我填的是~/zfile/
初始化异常, 请检查配置是否正确.
从根目录/开始填写,简单来选择就是进入你要设置的目录,输入pwd,将得到的结果填入即可
| gharchive/issue | 2020-05-14T06:06:34 | 2025-04-01T06:46:21.253099 | {
"authors": [
"hiie",
"iwayen"
],
"repo": "zhaojun1998/zfile",
"url": "https://github.com/zhaojun1998/zfile/issues/113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
268955655 | add python support
In my Python file, when I save the file, the header format is the one for JavaScript files; for Python we need hash comments.
JavaScript format
/*
* @Author: Georgios X
* @Date: 2017-10-27 03:45:48
* @Last Modified by: George X
* @Last Modified time: 2017-10-27 03:45:48
*/
python.py file
#################################
# @Author: Georgios X
# @Date: 2017-10-27 03:45:48
# @Last Modified by: George X
# @Last Modified time: 2017-10-27 03:45:48
#################################
python file Save and did not change the modification time
https://github.com/zhaopengme/vscode-fileheader/issues/16#issuecomment-542943053
https://github.com/zhaopengme/vscode-fileheader/issues/16#issuecomment-542943536
| gharchive/issue | 2017-10-27T00:49:58 | 2025-04-01T06:46:21.255641 | {
"authors": [
"George35mk",
"lamborghini1993",
"motraor3"
],
"repo": "zhaopengme/vscode-fileheader",
"url": "https://github.com/zhaopengme/vscode-fileheader/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1619485474 | How can I constrain GPT to answer with specific content for specific keywords?
As the title says; it seems to have no memory, apart from the role definition set up in config.json at the start.
Same question: is there a way to provide answers restricted to a given topic?
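A common workaround is to intercept the message before it reaches the model: if it matches a configured keyword, return a fixed answer, otherwise fall through. A minimal sketch with an invented keyword table:

```python
# Hypothetical keyword -> canned reply table; anything else goes to the model.
CANNED_REPLIES = {
    "pricing": "See our pricing page for current plans.",
    "refund": "Refunds are handled within 7 business days.",
}

def route_message(text, ask_model):
    """Answer from the keyword table when possible, else delegate to the model callback."""
    lowered = text.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return ask_model(text)
```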
| gharchive/issue | 2023-03-10T19:06:32 | 2025-04-01T06:46:21.263100 | {
"authors": [
"le0zh0u",
"rogermzy"
],
"repo": "zhayujie/bot-on-anything",
"url": "https://github.com/zhayujie/bot-on-anything/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96918063 | alternative labels part_of and has_participant
From mcour...@gmail.com on June 27, 2012 14:15:36
Melanie is worried about the disappearance of part_of and has_participant from the spec and impact on users. Alan suggests to give the "part of at all times" the label "part of" and "has participant at some time" the label "has
participant".
see https://groups.google.com/forum/?fromgroups#!topic/bfo-owl-devel/k7KkoOn6DY0
Original issue: http://code.google.com/p/bfo/issues/detail?id=71
From dosu...@gmail.com on June 28, 2012 15:22:35
> Alan suggests giving "part of at all times" the label "part of" and "has participant at some time" the label "has participant".
I think this is a good solution.
From alanruttenberg@gmail.com on June 29, 2012 13:31:32
After discussion at meeting of 2012-06-29 we decided to close this issue. We don't want to make the change now because it is unclear whether the assumptions of the old relation will carry to the new relations - there are four candidates for part-of (c,o x all/some times). So we will document this in the release notes and take it up again after we get wider comment from the community.
Status: WontFix
| gharchive/issue | 2015-07-23T22:28:36 | 2025-04-01T06:46:21.272416 | {
"authors": [
"zhengj2007"
],
"repo": "zhengj2007/BFO-test",
"url": "https://github.com/zhengj2007/BFO-test/issues/71",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
241959119 | Error when parsing the test set
Hello,
The model was finally done with training on the dev set but when it was about to start with the test set, it gave a really big error:
Validation Data Eval:
Current accuracy is 87.35
Best accuracy on dev set is 87.35
Decoding on the test set:
WARNING:tensorflow:From SentenceMatchTrainer.py:330 in main.: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
Traceback (most recent call last):
File "SentenceMatchTrainer.py", line 391, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "SentenceMatchTrainer.py", line 339, in main
saver.restore(sess, best_path)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1388, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 766, in run
run_metadata_ptr)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
Caused by op u'save/RestoreV2', defined at:
File "SentenceMatchTrainer.py", line 391, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "SentenceMatchTrainer.py", line 334, in main
saver = tf.train.Saver(vars_)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1000, in init
self.build()
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1030, in build
restore_sequentially=self._restore_sequentially)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 624, in build
restore_sequentially, reshape)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 361, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 200, in restore_op
[spec.tensor.dtype])[0])
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 441, in restore_v2
dtypes=dtypes, name=name)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
op_def=op_def)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/dc-hadil/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1128, in init
self._traceback = _extract_stack()
NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models//SentenceMatch.sample.best.model
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
Is there a way to pick up the process from the test set, or do I have to start the training all over again? Also, why was there an error?
You can fix this problem easily by modifying the path from "models//SentenceMatch.sample.best.model" to "models/SentenceMatch.sample.best.model".
(Just replace the double slash into a single slash)
The new version of TensorFlow doesn't support a double slash in the path.
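As a sketch of why the double slash appears and why collapsing it fixes the checkpoint load (this is illustrative Python, not the trainer's actual code; the variable names are hypothetical):

```python
import os

# A doubled separator typically comes from concatenating a directory that
# already ends in "/" with another leading "/" (e.g. an empty model prefix).
model_dir = "models/"
bad_path = model_dir + "/" + "SentenceMatch.sample.best.model"
print(bad_path)   # models//SentenceMatch.sample.best.model

# os.path.normpath collapses the duplicate separator, producing the form
# that the newer checkpoint reader accepts:
good_path = os.path.normpath(bad_path)
print(good_path)  # models/SentenceMatch.sample.best.model
```

Alternatively, building the path with `os.path.join("models", name)` from clean components avoids introducing the double slash in the first place.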
Ok, I ran the command you shared and was able to get results, but I didn't understand the output. I thought I would get a 0 or 1 output in the case of "prediction" and a probability in the case of "probs". Can you please explain the output to me so I can make sense of it?
In the case of "prediction" I got the below sample:
0 0 hh what is hh ?
0 1 what me worry ? what is worrying ?
1 0 where is armenia ? what is armenia ?
0 0 what is e-hrm ? what is hrm ?
1 1 is/was quora down ? was quora down ?
What is the difference between the values in the first column and the second? Which one is the prediction?
In the case of "probs" I got the below sample:
0 1:0.182387366891 0:0.817612707615
0 1:0.663396537304 0:0.336603462696
1 1:0.211678221822 0:0.788321793079
0 1:0.00674327090383 0:0.993256688118
1 1:0.86587280035 0:0.134127125144
I thought that the one with the highest probability would be put in the first column but that's not the case. If you look at the second row, it has a label 0 (in the first column) even though the label 1 has a higher probability.
Apologies for the constant questions but I don't want to assume wrong information :/
For the "prediction" file, the first column is the ground-truth, and the second column is the predictions.
You can refer to https://github.com/zhiguowang/BiMPM/blob/master/src/SentenceMatchTrainer.py#L93
For the "probs" file, the first column is the ground-truth, and the following items are the probs for all possible labels.
You can refer to https://github.com/zhiguowang/BiMPM/blob/master/src/SentenceMatchTrainer.py#L98
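A minimal sketch (not code from the BiMPM codebase) of reading the two output files using the column meanings described above: in "prediction" the first column is the ground truth and the second is the prediction; in "probs" the first column is the ground truth followed by `label:prob` pairs. The sample lines below are taken from this thread.

```python
def parse_prediction_line(line):
    """Return (ground_truth, prediction) from a prediction-file line."""
    parts = line.split()
    return int(parts[0]), int(parts[1])

def parse_probs_line(line):
    """Return (ground_truth, {label: prob}) from a probs-file line."""
    parts = line.split()
    truth = int(parts[0])
    probs = {}
    for item in parts[1:]:
        label, prob = item.split(":")
        probs[int(label)] = float(prob)
    return truth, probs

truth, pred = parse_prediction_line("1 0 where is armenia ? what is armenia ?")
print(truth, pred)  # 1 0 -> this example was misclassified

truth, probs = parse_probs_line("0 1:0.663396537304 0:0.336603462696")
# The first column is the ground truth, so it can disagree with the
# highest-probability label (here the model leans toward label 1).
print(truth, max(probs, key=probs.get))  # 0 1
```

This also explains the second "probs" row above: label 0 is the ground truth even though the model assigned label 1 the higher probability.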
Ok great! Thank you very much for the explanation! 👍
| gharchive/issue | 2017-07-11T07:56:14 | 2025-04-01T06:46:21.344254 | {
"authors": [
"hadilnc01",
"zhiguowang"
],
"repo": "zhiguowang/BiMPM",
"url": "https://github.com/zhiguowang/BiMPM/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
pip3 install -r requirements.txt error
ERROR: Could not find a version that satisfies the requirement pyqtgraph==0.11.0.dev0+g96be1bd
ERROR: No matching distribution found for pyqtgraph==0.11.0.dev0+g96be1bd
It looks like the dev version has to be downloaded and installed from GitHub?
And a second one:
ERROR: Could not find a version that satisfies the requirement spleeter==1.5.3
ERROR: No matching distribution found for spleeter==1.5.3
| gharchive/issue | 2021-05-05T06:03:31 | 2025-04-01T06:46:21.347507 | {
"authors": [
"LYF511"
],
"repo": "zhimingshenjun/DD_KaoRou2",
"url": "https://github.com/zhimingshenjun/DD_KaoRou2/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
839458965 | Docs: Fixed cli section and main help text
See commit message.
No review needed.
Coverage remained the same at 37.042% when pulling 2969278f1d3ce82ec94a77993414f6d3634746f8 on andy/fix-docs-cli into a193803a4d75bfa08de6d5f0d386d1e0b72f48a6 on master.
| gharchive/pull-request | 2021-03-24T08:01:07 | 2025-04-01T06:46:21.367910 | {
"authors": [
"andy-maier",
"coveralls"
],
"repo": "zhmcclient/zhmccli",
"url": "https://github.com/zhmcclient/zhmccli/pull/159",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1709745943 | 🛑 腾讯云 COS 节点 is down
In 6465e2f, 腾讯云 COS 节点 (https://lisz.ink) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 腾讯云 COS 节点 is back up in 922cd4c.
| gharchive/issue | 2023-05-15T09:59:55 | 2025-04-01T06:46:21.370580 | {
"authors": [
"zhonger"
],
"repo": "zhonger/upptime",
"url": "https://github.com/zhonger/upptime/issues/2653",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Typo
npm instal gulp-livereload gulp-webserver --save-dev
Thanks for the reminder; it has been corrected.
| gharchive/issue | 2016-08-08T03:29:57 | 2025-04-01T06:46:21.375948 | {
"authors": [
"sxgc",
"zhonglimh"
],
"repo": "zhonglimh/Gulp",
"url": "https://github.com/zhonglimh/Gulp/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2158320580 | 🛑 T90 (WP) Japanese home page is down
In 3ae2f2a, T90 (WP) Japanese home page (https://www.ibm.com/community/z-and-cloud/ja/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: T90 (WP) Japanese home page is back up in 1051408 after 8 minutes.
| gharchive/issue | 2024-02-28T07:46:08 | 2025-04-01T06:46:21.396157 | {
"authors": [
"zhujunyong"
],
"repo": "zhujunyong/upptime",
"url": "https://github.com/zhujunyong/upptime/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2353143880 | 🛑 Z Community (WP) Home page is down
In 7979707, Z Community (WP) Home page (https://community.ibm.com/zsystems/home/) was down:
HTTP code: 503
Response time: 22454 ms
Resolved: Z Community (WP) Home page is back up in 53142d8 after 32 minutes.
| gharchive/issue | 2024-06-14T11:14:52 | 2025-04-01T06:46:21.398657 | {
"authors": [
"zhujunyong"
],
"repo": "zhujunyong/upptime",
"url": "https://github.com/zhujunyong/upptime/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1229647266 | libgstvideo-1.0.so.0: undefined reference to `c_RkRgaInit'
There was no problem using the old version of the linux-rga library, but this error is reported when using the latest version.
If I don't use the new version of the linux-rga library, the wrapbuffer_virtualaddr method cannot be used! Could you please take a look at how this can be handled? Thank you!
I don't know which header file library you are using; does the new version need to be initialized?
Which header file are you calling, RockchipRga.h or im2d_api/im2d.hpp?
May I ask why mine cannot connect? Using playbin directly works.
Resolved.
This is an automatic vacation reply from QQ Mail.
Hello, I am currently on vacation and cannot reply to your email personally. I will reply to you as soon as possible after the vacation ends.
| gharchive/issue | 2022-05-09T12:41:14 | 2025-04-01T06:46:21.440846 | {
"authors": [
"kalujee",
"zhensang",
"zhuyuliang"
],
"repo": "zhuyuliang/gst_rtsp_client",
"url": "https://github.com/zhuyuliang/gst_rtsp_client/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Electron 20 now supports HEVC hw decoding
According to https://github.com/StaZhu/enable-chromium-hevc-hardware-decoding,
Electron 20 now supports HEVC hardware decoding. Upgrade it and videos from Bilibili will no longer need to be re-encoded, which will save a lot of time.
better-sqlite3 fails to compile with Electron 20.
Refer to https://github.com/WiseLibs/better-sqlite3/issues/858
There is a fixing commit works: https://github.com/signalapp/better-sqlite3/commit/65d5e336e2e76b0a8398baea787638c803d20c41
| gharchive/issue | 2022-08-31T05:58:37 | 2025-04-01T06:46:21.443516 | {
"authors": [
"maoqxxmm"
],
"repo": "zhxie/anioke",
"url": "https://github.com/zhxie/anioke/issues/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160147230 | cc-znp package not working properly on my BeagleBoneBlack.
I have a CC2531 USB Evaluation Kit connected to the USB port on my BeagleBoneBlack (BBB). I also have two Zigbee end-devices in the vicinity of my BBB, which are expected to join the network once it is opened. When the following code
const ccznp = require('cc-znp');
const spCfg = {
path: '/dev/ttyACM0',
options: {
baudrate: 115200,
flowControl: true,
rtscts: true
}
};
ccznp.on('ready', () => {
console.log('Device is ready.');
ccznp.sapiRequest('startRequest', {}, () => {
});
ccznp.sapiRequest('permitJoiningRequest', {destination: 0xFFFC, timeout: 0xAA}, (error, result) => {
if (error) {
throw error;
}
// Verify the results
console.log(result);
});
});
ccznp.on('AREQ', (data) => {
console.dir(data);
});
ccznp.init(spCfg, (error) => {
console.log(error);
});
ccznp.on('close', function() {
console.log('Serial port closed.');
});
is run, I get the following error:
Error: request timeout
at ccznp.sapiRequest (.../test.js:82:23)
at CcZnp.<anonymous> (.../node_modules/cc-znp/lib/ccznp.js:190:13)
at CcZnp.g (events.js:286:16)
at emitOne (events.js:96:13)
at CcZnp.emit (events.js:188:7)
at Timeout.<anonymous> (.../node_modules/cc-znp/lib/ccznp.js:170:18)
at tryOnTimeout (timers.js:224:11)
at Timer.listOnTimeout (timers.js:198:5)
After running the script again several times (at least 6 times), the error suddenly disappears and I get the following results:
{
sof: 254,
len: 1,
type: 'SRSP',
subsys: 'SAPI',
cmd: 'permitJoiningRequest',
payload: { status: 0 },
fcs: 111,
csum: 111
}
After getting the results finally, if I run the same script again it just hangs until I stop it with CRTL+C!
It is worth noting that this procedure happens with all the other commands (e.g. UTIL_GET_DEVICE_INFO etc.) I try to call i.e. I must run the script several times, get the 'Error: request timeout' message above during each run, before it finally runs without any errors and yield the results.
Am I perhaps doing something wrong or is this a bug within the cc-znp package? I would be grateful if you can investigate this issue as soon as possible and let me know your feedback/solution.
Many thanks in advance.
Hi kilianyusinyu,
Thanks for your feedback. We encountered a message-processing performance issue on the RaspPi-2 before, so I was wondering if the BBB has the same problem. I just extended the request timeout from 3 seconds to 20 seconds and published version 0.0.5 to npm. Would you please try it again?
By the way, did you call reset in your script? Since reset will break the connection from cc2531 to your host, and many requests will be pended until they are timed out. (Just a notice for you if you are using cc2531)
Hi simenkid,
Many thanks for your quick response! The request timeout change from 3 to 20 seconds is quite noticeable with the new version 0.0.5 and I have tested again several times. Unfortunately, the problem still persists and I only get good results after several runs, whereby each run yields a timeout error after which one (i.e. the last run) suddenly yields expected results without any timeout errors - exactly as already described above. I guess the problem lies elsewhere inside the cc-znp package or inside one of its dependencies?
Just to answer your question regarding reset. No, I did not call reset in my script. But thanks all the same for the tip!
Hi, kilianyusinyu,
We are now tracing down the dependencies; it seems that a stream pipe is broken in the low-layer transport. We will fix this issue and let you know soon. :-)
Hi, kilianyusinyu,
It seems that something went wrong with the Node.js event loop. We just fixed the problem. Would you please try it again? And let us know if the problem is solved. ^^
(The package is now updated to v0.0.6.)
Hi simenkid,
I have just tested again and the problem still persists. After several runs (as already described above) it suddenly works, but this is not user-friendly! This implies that the cause of this request timeout problem is still elsewhere in some package.
Hi kilianyusinyu,
Sorry for the inconvenience. There is no event loop problem; the problem is a broken stream in a lower layer. We have already fixed and tested it. It should be ok. Would you please try it again? Thanks :-)
(The package is now updated to v0.0.7)
Hi kilianyusinyu,
The following code is test ok.
const ccznp = require('cc-znp');
const spCfg = {
path: '/dev/ttyACM0',
options: {
baudrate: 115200,
flowControl: true,
rtscts: true
}
};
ccznp.on('ready', () => {
console.log('Device is ready.');
ccznp.sapiRequest('startRequest', {}, () => {
});
});
ccznp.on('AREQ', (Ind) => {
// { subsys: 'ZDO', ind: 'stateChangeInd', data: { state: 8 } }
console.dir(Ind);
var data = Ind.data;
if (data.state === 9) {
ccznp.sapiRequest('permitJoiningRequest', {destination: 0xFFFC, timeout: 0xAA}, (error, result) => {
if (error) {
throw error;
}
// Verify the results
console.log(result);
});
}
});
ccznp.init(spCfg, (error) => {
console.log(error);
});
ccznp.on('close', function() {
console.log('Serial port closed.');
});
and result:
Hi jackchased,
I have tested several times with the updated version 0.0.7 and each time the AREQ event is not fired and the script keeps waiting for it for quite a very long time after which I simply stop it with CTRL+C. Unfortunately, no errors are thrown to inform me about the cause of the problem and this is not helping me either.
Hi kilianyusinyu,
What do you mean "each time the AREQ event is not fired and the script keeps waiting for it for quite a very long time"?
Is it possible to give us your code under test?
Hi jackchased,
What I mean is that the code above that worked for you did not work for me. In my case, I have a CC2531 USB Evaluation Kit connected to the USB port on my BeagleBoneBlack which in turn is connected to the USB port of my MacBook.
As you can see, the AREQ event is never fired and the script keeps waiting for it for quite a very long time.
Hello kilianyusinyu,
I think I found the problem. The npm module did not point to its repository properly. I just fixed the link to the correct repository. You may like to run npm install cc-znp@0.0.8 to ensure the latest release is installed on your host.
Sorry, it's our fault.
| gharchive/issue | 2016-06-14T10:25:21 | 2025-04-01T06:46:21.458595 | {
"authors": [
"jackchased",
"kilianyusinyu",
"simenkid"
],
"repo": "zigbeer/cc-znp",
"url": "https://github.com/zigbeer/cc-znp/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1163420372 | Add device support for TS110E Model QS-Zigbee-D02-TRIAC-LN
This device uses a non-default set level command.
This device looks exactly the same as TS110F but seems to have different zigbee command IDs. This is TS110F on blakadder: https://zigbee.blakadder.com/Lonsonho_QS-Zigbee-D02-TRIAC-LN.html
Tested in local docker environment.
Thanks to @javicalle this was possible.
Fix #1415
Codecov Report
Merging #1422 (25ab689) into dev (e3fa034) will decrease coverage by 0.23%.
The diff coverage is 72.50%.
@@ Coverage Diff @@
## dev #1422 +/- ##
==========================================
- Coverage 80.52% 80.29% -0.24%
==========================================
Files 222 233 +11
Lines 6414 7018 +604
==========================================
+ Hits 5165 5635 +470
- Misses 1249 1383 +134
| Impacted Files | Coverage Δ |
|---|---|
| zhaquirks/tuya/ts110e.py | 72.50% <72.50%> (ø) |
| zhaquirks/tuya/ts004f.py | 80.64% <0.00%> (-19.36%) :arrow_down: |
| zhaquirks/tuya/ts0601_siren.py | 93.96% <0.00%> (-6.04%) :arrow_down: |
| zhaquirks/xiaomi/aqara/tvoc.py | 73.80% <0.00%> (-2.67%) :arrow_down: |
| zhaquirks/xiaomi/aqara/roller_curtain_e1.py | 59.70% <0.00%> (-2.57%) :arrow_down: |
| zhaquirks/__init__.py | 77.89% <0.00%> (-1.48%) :arrow_down: |
| zhaquirks/tuya/mcu/__init__.py | 98.77% <0.00%> (-1.23%) :arrow_down: |
| zhaquirks/xiaomi/aqara/plug_mmeu01.py | 78.12% <0.00%> (-0.83%) :arrow_down: |
| zhaquirks/xiaomi/__init__.py | 82.37% <0.00%> (-0.81%) :arrow_down: |
| zhaquirks/xiaomi/mija/smoke.py | 89.47% <0.00%> (-0.53%) :arrow_down: |
... and 62 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e3fa034...25ab689. Read the comment docs.
Pull Request Test Coverage Report for Build 2485564661
29 of 40 (72.5%) changed or added relevant lines in 1 file are covered.
604 unchanged lines in 29 files lost coverage.
Overall coverage decreased (-0.2%) to 80.294%
Changes Missing Coverage:

| File | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| zhaquirks/tuya/ts110e.py | 29 | 40 | 72.5% |

Files with Coverage Reduction:

| File | New Missed Lines | % |
|---|---|---|
| zhaquirks/tuya/mcu/__init__.py | 2 | 98.17% |
| zhaquirks/xiaomi/aqara/ctrl_ln.py | 2 | 88.89% |
| zhaquirks/xiaomi/mija/smoke.py | 2 | 89.47% |
| zhaquirks/aurora/aurora_dimmer.py | 3 | 82.61% |
| zhaquirks/tuya/ts0601_electric_heating.py | 3 | 57.58% |
| zhaquirks/danfoss/thermostat.py | 4 | 82.76% |
| zhaquirks/tuya/ts130f.py | 4 | 83.02% |
| zhaquirks/xiaomi/aqara/opple_remote.py | 5 | 88.89% |
| zhaquirks/konke/__init__.py | 6 | 75.68% |
| zhaquirks/tuya/ts004f.py | 6 | 80.65% |

Totals:

| Change from base Build 1947732782 | -0.2% |
|---|---|
| Covered Lines | 5635 |
| Relevant Lines | 7018 |
💛 - Coveralls
After lots of testing I'm pretty confident this is working code now. All the checks are succeeding too. I've also updated it to work with the new attributes and server_commands (vs the old manufacturer_server_attributes and manufacturer_attributes). @javicalle should we merge it?
| gharchive/pull-request | 2022-03-09T02:35:54 | 2025-04-01T06:46:21.557369 | {
"authors": [
"codecov-commenter",
"coveralls",
"jonnylangefeld"
],
"repo": "zigpy/zha-device-handlers",
"url": "https://github.com/zigpy/zha-device-handlers/pull/1422",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
538473840 | Create LICENSE.MD
Added the Apache license
:tada: This PR is included in version 1.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-12-16T15:13:52 | 2025-04-01T06:46:21.602906 | {
"authors": [
"siemvaessen",
"zz-github-machine"
],
"repo": "zimmerman-zimmerman/IATI-Humanitarian-Data-Portal-CMS",
"url": "https://github.com/zimmerman-zimmerman/IATI-Humanitarian-Data-Portal-CMS/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1275845574 | Increase Hackability: zingo-cli regtest mode
Issues that add friction to development on this repo:
It's not obvious where the wallet file is stored/what it is named.
It's not obvious where the log files are stored
There's no obvious pattern for a "Test Setup"
User Story:
A developer wants to work on this repo. They read a README, execute a minimum number of commands and have a "test cli" setup.
The test cli setup includes:
* A concise README that lets the developer know what they need to know (including):
- location/name/format of log files (an example line is sufficient to understand format)
- `zcashd` config/datadirs that are in locations that are clearly described
- `lightwalletd` config/datadirs that are clearly named/located
* A locally running `zcashd`, in regtest mode, with:
- easily reproduced config/datadirs
* A locally running `lightwalletd` built from zcash main that is exclusively communicating with the local regtest-zcashd, and the test `zingo-cli`
* a running `zingo-cli` attached exclusively to the locally running `lightwalletd` as described above.
Once I have read the README, I understand the zingo-cli regtest mode setup.
Moving forward I can git worktree add ../FRESHCHECKOUT, and in that repo run a single command, the result is a running zingo-cli as described above.
Updating the README seems straightforward enough, but I don't understand, specifically, how you would want to handle the last part of your ask.
Moving forward I can git worktree add ../FRESHCHECKOUT, and in that repo run a single command, the result is a running zingo-cli as described above.
Would this involve a zcashd (and lightwalletd) as submodules? I would think the sync time with such a setup would be really very significant.
If not, I would think instructions in the README would be opinionated as to where these repos would exist in a filesystem, but that individual adjustments could be taken by developers with the understanding they might have to tweak specific commands, which seems to run a bit counter to your ask here.
I like the more flexible option of providing a recipe but not dictating or turning this into a larger repo (I also dislike submodules).
Or am I missing another possibility here?
I am -1 on submodules for this work. Submodules could be part of nice solution, but are unnecessary for the basic tool.
The correct answer is an opinionated README, and a script that fails-on-start if the expected locations are unavailable.
Alright, sounds good, thanks for clarifying.
I think we should focus on understanding the current logging systems/patterns so we correctly migrate all logs when running in regtest mode.
Also, note that the code currently supports mainnet and testnet only. Adding support for a regtest network would be a new feature.
To refine the scope here. We intend to add a --regtest flag to zingo-cli.
Probably that flag will require the user to provide zcashd and lightwalletd binaries as additional arguments, and fail helpfully in their absence.
I worked with this for a while today, and interestingly enough, I could not get zcashd's most recent commit on main to run in regtest mode at all!
For me, testnet and mainnet work fine, and other flags to the cli work fine, but when I either pass --regtest on the cli invocation or point to a config containing regtest=1, I run into a problem.
Several times it gave me an error and suggested running -reindex and reindex-chain. Trying these flags, they would both hang. After running those and cancelling, now when running the regtest versions no error prints, and the process just hangs. (I've let it run for several minutes; ps aux reports the process running, using some smaller amount of CPU [about 6%] and a possibly reasonable-seeming amount of virtual memory [3.3 GB - zcashd running on mainnet is showing me 6.5 GB], but cumulative CPU time does not increase [I see stoppage at 0:15]).
After a clean and rebuild, I am seeing the same.
Can anyone else confirm or deny this behavior by trying locally to run regtest? I could open an issue at the zcash repo but ideally I don't want to do that if I'm the only one for whom this is happening.
What flags are you using? Try the command that hangs, but add -printtoconsole and see if that tells you anything?
Please paste the exact hanging command, and the exact error here.
I confirmed that it wasn't running the way I had it set up before ("custom" config file name, passed with the zcashd invocation to point the daemon at it) - I was getting error: couldn't connect to server: unknown (code -1) from the cli side.
With @AloeareV's help I was able to get it to a working state. One of the things I did was to move aside my normal wallet.dat and zcash.conf files and move in my regtest-specific file. The other thing I did was to add nuparams lines.
For future reference my config file looks like this now.
regtest=1
nuparams=5ba81b19:1 # Overwinter
nuparams=76b809bb:1 # Sapling
nuparams=2bb40e60:1 # Blossom
nuparams=f5b9230b:1 # Heartwood
nuparams=e9ff75a6:1 # Canopy
nuparams=c2d6d0b4:1 # NU5

#testnet=1
txindex=1
insightexplorer=1
experimentalfeatures=1
rpcuser=xxxxxx
rpcpassword=xxxxxx
Everything below #testnet=1 is there because Aloe suggested I'll need that for lightwalletd connectivity.
The lightwalletd params listed above can be found here:
https://github.com/zcash/lightwalletd/#zcashd
This page also has a lot of useful content:
https://zcash.readthedocs.io/en/latest/rtd_pages/lightwalletd.html
As I mentioned elsewhere, I think that a sane way to establish a base state with respect to zcashd is this:
Assuming a local clone and that you're starting in the root of the clone, with the commit of interest (master) checked out:
(1) git worktree add ../$EXPT_NAME
(2) cd ../$EXPT_NAME
(3) paru -S python-simplejson
(4) ./zcutil/clean.sh && ./zcutil/build.sh -j$(nproc) && ./qa/zcash/full_test_suite.py --nozmq
The downsides are that this takes a while and consumes a bit of bandwidth and disk space, but it also "restates your assumptions", so to speak.
It's a way of establishing a clean work-bench.
Unfortunately, I am not able to pass all tests (by running ./qa/zcash/full_test_suite.py) on a clean-built zcash/master or zingolabs/NO_zmq branch (a minor, targeted modification of zcash/main), with or without flags.
Running the tests without flags seems to pass most tests, but I see
Running stage rpc
=================
ERROR: "import zmq" failed. Use --nozmq to run without the ZMQ tests.To run zmq tests, see dependency info in /qa/README.md.
Traceback (most recent call last):
File "/home/ha/zcashes/zcash/qa/pull-tester/rpc-tests.py", line 455, in <module>
main()
File "/home/ha/zcashes/zcash/qa/pull-tester/rpc-tests.py", line 223, in main
import zmq
ModuleNotFoundError: No module named 'zmq'
and then quits with !!! One or more test stages failed !!!
Running the command above fails and prints full_test_suite.py: error: unrecognized arguments: --nozmq, likewise for a single hyphen version, following the NO_zmq branch.
Done with #26 over on new-name repo.
| gharchive/issue | 2022-06-18T17:49:05 | 2025-04-01T06:46:21.623348 | {
"authors": [
"AloeareV",
"dannasessha",
"zancas"
],
"repo": "zingolabs/zingolib",
"url": "https://github.com/zingolabs/zingolib/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
607113022 | Cache is not working on app start
Describe the bug
When I use the Query() widget right at the start of the app, the cache is not working, so I get an exception if my internet is off. But after I press a button to execute a refetch, everything works... So the cache is available, but somehow not right at the start of the app.
Expected behavior
The cache should be available right at the start of the app.
Maybe if there are reasons why it takes some time for the cache to come up, the Query() widget should wait for it?
Desktop (please complete the following information):
OS: Windows
Version 10
Smartphone (please complete the following information):
Device: Xiaomi Mi 9T Pro
OS: Android
Version 10
Actually I found out that it is kind of working in release mode, but I am not sure if I can say that for every device, because clearly something is off with this and it's still not working perfectly.
So when I open the app without internet it's working fine in release mode, but if I open it with the network on I don't see any cache, just a progress bar until the network request is finished.
That's not how it should be.
Same issue for me. It gives a host lookup error rather than loading data from the cache itself.
You can try to overcome this by applying the cache only fetch policy in case the internet connection is missing
https://github.com/zino-app/graphql-flutter/blob/42d010c177f7fa752b9a0b5ca57548dd1cd3ad8f/packages/graphql/lib/src/core/query_options.dart#L21
There could also be a case where the cache hydrates without awaiting completion...
This requires additional investigation...
if I use cache only I get CacheMissException
Okay so in my case cacheOnly works. That's kind of interesting 🤔
Now the question is, why is the cache working if you use cacheOnly, but not if you use cacheAndNetwork when starting the app?
Actually I have to take back what I said.
So after playing around a bit more it turned out, that also cacheOnly isn't working all the time.
It's definitely better than cacheAndNetwork but it's far from perfect.
When I use a FutureBuilder to wait until getApplicationDocumentsDirectory() succeeds I get it working all the time on my device, but that's a weird hack and I have no idea if that would work on every device, as this seems to be some kind of performance thing? "If you wait long enough you might just get the cache" or something...
So this really still is a huge bug and there seems to be no real workaround right now.
It seems the built-in approach to launch 'getApplicationDocumentsDirectory()' is not working as expected.
If you could prepare a reproducible GitHub repo I'll take a look and maybe will be able to create a PR.
That's very nice of you, thanks.
My current project is closed source, but I will try to build a small version for demonstration purposes ASAP :)
Just as a small update:
Even with the FutureBuilder waiting for getApplicationDocumentsDirectory()
I get a CacheMissException 1 out of 10 tries or something like that.
Thanks for your active help in investigating this issue!
You are welcome :) But my intentions are purely egoistic, as I need this bug to be fixed :D
So I created a simple app that showcases my issue:
https://github.com/lazylazyllama/graphql_flutter-cache-on-start-issue
By default I configured the cache to be cacheAndNetwork so that you can see something by launching the app first. As the cache needs to be created anyways at some point.
You can easily change that to cacheOnly here
Thanks I’ll take a look!
That's cool thanks you a lot :)
While playing around I also noticed, that there is a similar issue that happens at runtime and it might be related.
So what I did was:
final result = await graphql.GraphQLProvider.of(context).value.query(graphql.QueryOptions(
documentNode: SchoolQuery().document,
));
And right after that I tried to read that value with a Query() Widget and cacheOnly as the FetchPolicy. But what I get is a CacheMissException. But after a restart its working except on those 1 out of 10 or 20 or 30 .. :D times
Context: This is part of my login process. I do the first fetch to put the School into the cache, and after that I navigate to the main page, which uses cacheOnly.
You are welcome :) But my intentions are purely egoistic, as I need this bug to be fixed :D
So I created a simple app that showcases my issue:
https://github.com/lazylazyllama/graphql_flutter-cache-on-start-issue
By default I configured the cache to be cacheAndNetwork so that you can see something by launching the app first. As the cache needs to be created anyways at some point.
You can easily change that to cacheOnly here
Tried to run your example. Seems to be throwing CacheMissException every single time.
Ran the app with cacheAndNetwork policy, and then changed it to cacheOnly but no success
I don't get this issue anymore using Hive Storage in 4.0.0 alpha
| gharchive/issue | 2020-04-26T21:04:40 | 2025-04-01T06:46:21.638948 | {
"authors": [
"Faiyyaz",
"aseef17",
"lazylazyllama",
"vasilich6107"
],
"repo": "zino-app/graphql-flutter",
"url": "https://github.com/zino-app/graphql-flutter/issues/617",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2239771084 | Please add support for connectivity_plus 6.0.2
Problem statement: The App Store has introduced mandatory privacy manifest files for some of the packages used in applications, and connectivity_plus is one of them. connectivity_plus 6.0.2 has added support for the privacy manifest, but graphql_flutter still uses connectivity_plus 5.0.0, which leads to a compatibility issue. The App Store is also very specific about the deadline, which is the 1st of May.
https://developer.apple.com/support/third-party-SDK-requirements/
Please update the connectivity_plus package to the latest
I have created a PR with changes
https://github.com/zino-hofmann/graphql-flutter/pull/1424
Please check and do the needful.
much needed ^
Still crashes ios app
any update about this ?
Please see graphql_flutter-v5.2.0-beta.7
| gharchive/issue | 2024-04-12T10:39:11 | 2025-04-01T06:46:21.643455 | {
"authors": [
"Avinash-dhiraj",
"AymanElarian",
"SachinCodeMatrix",
"callmephil",
"vincenzopalazzo"
],
"repo": "zino-hofmann/graphql-flutter",
"url": "https://github.com/zino-hofmann/graphql-flutter/issues/1423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1095206763 | update README.md
Docs
Updated graphql_flutter/README.md
I think I've accepted your suggestions but I'm not sure it worked, Sorry I'm new to this
I think I've accepted your suggestions but I'm not sure it worked, Sorry I'm new to this
you need to change your code, and make another commit after that you change are reflected on the PR
| gharchive/pull-request | 2022-01-06T11:02:34 | 2025-04-01T06:46:21.645412 | {
"authors": [
"adnanjpg",
"vincenzopalazzo"
],
"repo": "zino-hofmann/graphql-flutter",
"url": "https://github.com/zino-hofmann/graphql-flutter/pull/1003",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2177063926 | Update aws-java-sdk-core to 1.12.676
About this PR
📦 Updates com.amazonaws:aws-java-sdk-core from 1.12.459 to 1.12.676
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.amazonaws", artifactId = "aws-java-sdk-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "com.amazonaws", artifactId = "aws-java-sdk-core" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #722.
| gharchive/pull-request | 2024-03-09T05:39:23 | 2025-04-01T06:46:21.649242 | {
"authors": [
"scala-steward"
],
"repo": "zio/zio-flow",
"url": "https://github.com/zio/zio-flow/pull/721",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
963115539 | Update google-cloud-pubsub to 1.113.6
Updates com.google.cloud:google-cloud-pubsub from 1.107.0 to 1.113.6.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.google.cloud", artifactId = "google-cloud-pubsub" } ]
labels: library-update, semver-minor
Superseded by #251.
| gharchive/pull-request | 2021-08-07T00:24:01 | 2025-04-01T06:46:21.652399 | {
"authors": [
"scala-steward"
],
"repo": "zio/zio-gcp",
"url": "https://github.com/zio/zio-gcp/pull/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2030268987 | [Bug]: id_token_hint invalid for user session after some idle time
Preflight Checklist
[X] I could not find a solution in the documentation, the existing issues or discussions
[X] I have joined the ZITADEL chat
Environment
Self-hosted
Version
2.42.0
Database
PostgreSQL
Database Version
15
Describe the problem caused by this bug
https://discord.com/channels/927474939156643850/1178715038136148028
SPA OIDC client integration
I caught this error with my client that integrates with zitadel. The client is a JS library oidc-client-js (ts) with token update functionality in the background.
SilentRenewService._tokenExpiring: Error from signinSilent: Unfortunately, the user may not be logged in and/or additional interaction is required.
Apparently the bug is on the zitadel side as stated in this issue. Could you give me a hint, is it fixed?
https://github.com/dotnet/aspnetcore/issues/50812
Logs 1:
2023/11/28 12:54:49 WARN auth request oidc_error.description="Unfortunately, the user may be not logged in and/or additional interaction is required." oidc_error.type=interaction_required
2023/11/28 12:54:56 WARN auth request oidc_error.description="Unfortunately, the user may be not logged in and/or additional interaction is required." oidc_error.type=interaction_required
2023/11/28 12:55:02 WARN request error oidc_error.parent="token has expired" oidc_error.description="id_token_hint invalid" oidc_error.type=invalid_request
Logs 2 (I'm not sure if this is related, but it may well be):
2023/12/06 07:29:04 WARN request error oidc_error.parent="invalid signature (invalid signature: no possible keys matches)" oidc_error.description="id_token_hint invalid" oidc_error.type=invalid_request
time="2023-12-06T07:56:13Z" level=warning msg="token verifier repo: decrypt access token" caller="/home/runner/work/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore/token_verifier.go:266" error="ID=APP-ASdgg Message=invalid token"
time="2023-12-06T07:56:13Z" level=warning msg="token verifier repo: decrypt access token" caller="/home/runner/work/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore/token_verifier.go:266" error="ID=APP-ASdgg Message=invalid token"
time="2023-12-06T07:56:13Z" level=warning msg="token verifier repo: decrypt access token" caller="/home/runner/work/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore/token_verifier.go:266" error="ID=APP-ASdgg Message=invalid token"
time="2023-12-06T07:56:15Z" level=warning msg="token verifier repo: decrypt access token" caller="/home/runner/work/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore/token_verifier.go:266" error="ID=APP-ASdgg Message=invalid token"
time="2023-12-06T07:56:15Z" level=warning msg="token verifier repo: decrypt access token" caller="/home/runner/work/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore/token_verifier.go:266" error="ID=APP-ASdgg Message=invalid token"
2023/12/06 08:39:09 WARN request error oidc_error.parent="invalid signature (invalid signature: no possible keys matches)" oidc_error.description="id_token_hint invalid" oidc_error.type=invalid_request
2023/12/06 08:39:12 WARN request error oidc_error.parent="invalid signature (invalid signature: no possible keys matches)" oidc_error.description="id_token_hint invalid" oidc_error.type=invalid_request
To reproduce
Create project, create app with user-agent SPA settings
Redirects URIs
http://localhost:8080/auth/signinpop/main
http://localhost:8080/auth/signinwin/main
http://localhost:8080/auth/signinsilent/main
then copy client_id
Create a sample vue app and integrate it with package: https://github.com/soukoku/vue-oidc-client
Type login & password, close tab
Wait a some time, may be 1-3-5 hours
Open a new tab with vue sample project and catch an error:
Screenshots
No response
Expected behavior
working normally
Operating System
OS: Mac 13.2.1
Browser: Chrome Version 121.0.6140.0 (Official Build) canary (arm64) or Firefox or any browser
Relevant Configuration
No response
Additional Context
No response
@muhlemmer @livio-a can you help here?
Looks like we just forgot about it: https://github.com/zitadel/zitadel/issues/5232
There are two reasons for this:
Either the token has expired (bug in our OIDC library)
or the token was signed with a key that is not available anymore.
The session (incl. access_token) will therefore not be valid anymore.
@hifabienne But looks like more and more people run into this and we should try to find a solution. With the integration of v3 we should now be able to handle it (custom implementation of EndSession)
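The two causes above leave different fingerprints in the id_token itself, so they can be told apart locally by decoding the token without verifying it. A minimal, hypothetical Node.js sketch (not part of ZITADEL; the function names and the second error label are made up for illustration, loosely echoing the log messages earlier in this thread):

```javascript
// Hypothetical diagnostic helper: decode an id_token locally to see whether
// "id_token_hint invalid" stems from expiry ("token has expired") or from a
// signing key that is no longer published (header.kid missing from the JWKS).
function decodeJwtPart(part) {
  // JWTs use base64url; convert to plain base64 before decoding.
  const b64 = part.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}

function diagnoseIdTokenHint(jwt, publishedKids, nowSeconds) {
  const [headerB64, payloadB64] = jwt.split('.');
  const header = decodeJwtPart(headerB64);
  const payload = decodeJwtPart(payloadB64);
  const reasons = [];
  if (typeof payload.exp === 'number' && payload.exp < nowSeconds) {
    reasons.push('token has expired');
  }
  if (header.kid && !publishedKids.includes(header.kid)) {
    reasons.push('signing key no longer published');
  }
  return reasons;
}

// Build a fake, unsigned JWT just to exercise the helper.
const enc = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');
const jwt = [enc({ alg: 'RS256', kid: 'old-key' }), enc({ exp: 1000 }), ''].join('.');
console.log(diagnoseIdTokenHint(jwt, ['current-key'], 2000));
// → [ 'token has expired', 'signing key no longer published' ]
```

Running this against a real id_token (with the kids fetched from the issuer's JWKS endpoint) would show which of the two failure modes a given user session hit.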
hei @roquie
I would close this issue to not have to handle the duplicates. I just asked for an estimation for the other issue, so we can plan it for a future sprint.
Still getting this error; is it solved? I wonder why it has been closed when there is no solution. Everyone here is just guessing at what the problem might be.
This was closed as a duplicate. Refer to the linked issue to see the resolutions we took: #5232
| gharchive/issue | 2023-12-07T09:21:07 | 2025-04-01T06:46:21.682975 | {
"authors": [
"codersaadi",
"hifabienne",
"livio-a",
"muhlemmer",
"roquie"
],
"repo": "zitadel/zitadel",
"url": "https://github.com/zitadel/zitadel/issues/7035",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
266243209 | TypeError when running the codes
I tried to run the library with Node.js and an error occurred.
Symbol.asyncIterator = Symbol.asyncIterator || Symbol.for('Symbol.asyncIterator');
^
TypeError: Cannot assign to read only property 'asyncIterator' of function 'function Symbol() { [native code] }'
at /Users/hhhung/Desktop/face_feature/node_modules/flash-store/bundles/flash-store.umd.js:101:26
at rimrafProxy__default (/Users/hhhung/Desktop/face_feature/node_modules/flash-store/bundles/flash-store.umd.js:40:68)
at Object. (/Users/hhhung/Desktop/face_feature/node_modules/flash-store/bundles/flash-store.umd.js:43:2)
at Module._compile (module.js:624:30)
at Object.Module._extensions..js (module.js:635:10)
at Module.load (module.js:545:32)
at tryModuleLoad (module.js:508:12)
at Function.Module._load (module.js:500:3)
at Module.require (module.js:568:17)
at require (internal/module.js:11:18)
flash-store version: 0.1.2
How could I fix the problem? Thanks
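For context on why that top line throws: Symbol.asyncIterator is defined as a non-writable property in engines that already ship it, so reassigning it fails under strict mode (which UMD bundles typically run in). A sketch of the general guarded-polyfill pattern that avoids the assignment; this is illustrative only, not the actual fix that landed in flash-store:

```javascript
// Only define Symbol.asyncIterator when the engine does not already have it.
// Assigning to an existing, non-writable well-known symbol throws in strict
// mode, which is exactly the TypeError reported above.
if (typeof Symbol.asyncIterator === 'undefined') {
  Object.defineProperty(Symbol, 'asyncIterator', {
    value: Symbol.for('Symbol.asyncIterator'),
  });
}
console.log(typeof Symbol.asyncIterator); // → "symbol"
```

On modern Node the guard is simply skipped, because the symbol is built in.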
Which node version did you use?
Did you try node v8? I believe it would work without problems under the latest version of node.
I used node v8 at first and I changed to v7.0.0 but the same error appears.
Could you please post your code which can reproduce this issue?
import 'babel-polyfill'
import { tmp } from './module'
tmp()
import { Facenet } from 'facenet'
export async function tmp(){
const facenet = new Facenet()
const imageFile = './test.png'
const faceList = await facenet.align(imageFile)
for (const face of faceList) {
const embedding = await facenet.embedding(face)
console.log('embedding: ', embedding)
}
}
Then I use babel to convert the two files into ES5.
Thank you for providing the re-produce code.
Could you try to run them with babel-node and see if it produces the same result?
Errors occur when running with babel-node
For index.js(upper one):
throw new Error("only one instance of babel-polyfill is allowed");
^
Error: only one instance of babel-polyfill is allowed
For test.js(lower one):
throw err;
^
SyntaxError: /Users/hhhung/Desktop/face_feature/test.js: await is a reserved word (6:17)
Are you able to run the demo from this repository?
It uses ts-node and should work without any problem.
Try run npm run demo
I have never tried babel, will try it when I have time.
Thanks for your advices. Now I am able to run the codes and I have one more question. Is there any requirement for the input photo? I use some face photos as input but no face feature vector is returned.
There's two steps: alignment and embedding.
Which one did you tried, alignment or embedding?
I suggest you to read the source code to get to know the reason.
If you still has problem, please provide the reproduce procedure for your issue.
Just tried embedding function with size 256x256, 128x128 and 96x96 face photos. Only the one with size 256x256 returns a feature vector.
Actually, I believe all input images will be resized to 160x160 before being fed to the neural network.
So facenet.embedding() should return a 128-dim vector no matter what size the image is.
You can check the source code, and let me know your result.
Thanks.
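As a side note on how these 128-dim embeddings are typically consumed: faces are usually compared by the Euclidean distance between their embedding vectors, with a smaller distance meaning the faces are more likely the same person. A small self-contained sketch, independent of the facenet library (the toy vectors and any threshold you would pick are illustrative, not values from node-facenet):

```javascript
// Compare two face embeddings by Euclidean distance.
function euclideanDistance(a, b) {
  if (a.length !== b.length) {
    throw new Error('embeddings must have the same dimension');
  }
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Two toy 4-dim "embeddings" (real facenet embeddings are 128-dim).
const faceA = [0.1, 0.2, 0.3, 0.4];
const faceB = [0.1, 0.2, 0.3, 0.8];
console.log(euclideanDistance(faceA, faceB).toFixed(2)); // → "0.40"
```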
I ran the demo, replacing the input
const imageFile = `${__dirname}/../tests/fixtures/two-faces.jpg`
with a 96x96 JPG.
No result is returned.
But when the photo is enlarged to a higher dimension, like 256x256, an embedding vector is returned.
| gharchive/issue | 2017-10-17T19:08:04 | 2025-04-01T06:46:21.694363 | {
"authors": [
"henrych4",
"zixia"
],
"repo": "zixia/node-facenet",
"url": "https://github.com/zixia/node-facenet/issues/68",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2521253707 | 试玩了一下,打开白屏,控制台有报错
目前在调试状态,个别游戏或者软件会报错,等后面重构就好了。
另外可以勾上兼容模式,应该就不报错了,除了Anyview可能会报错
最新版本已经修复此问题,可能需要刷新浏览器缓存
| gharchive/issue | 2024-09-12T03:30:29 | 2025-04-01T06:46:21.699218 | {
"authors": [
"zixing131",
"zuisong"
],
"repo": "zixing131/myjump",
"url": "https://github.com/zixing131/myjump/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Problem when running TransAE.py
Dear author, hello!
When I run the TransAE.py file, it shows FileNotFoundError: [Errno 2] No such file or directory: 'embedding_weights/textembed_11292_100_2_40.pkl'. I could not find a file that generates the text embeddings in pkl format. Does the code you provided include a file that generates the text embeddings? I would be very grateful for your guidance.
Sincerely yours,
bjut-zhangdabao
2024.3.25
Attached: screenshot of the problem:
This error occurs at save time; the cause should be that the embedding_weights folder has not been created. You can try creating an empty embedding_weights folder and running it again.
Dear author, thank you for your guidance; with your help the problem has been solved. In the days ahead, I wish you success in overcoming every obstacle and smooth progress in your research. I very much look forward to meeting and exchanging ideas with you on this research road full of challenges and opportunities.
The author's attitude toward research is a model for us all.
| gharchive/issue | 2024-03-25T03:43:00 | 2025-04-01T06:46:21.710291 | {
"authors": [
"bjut-zhangdabao",
"flow3rdown"
],
"repo": "zjunlp/MKG_Analogy",
"url": "https://github.com/zjunlp/MKG_Analogy/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
104103929 | More realistic turning under ILS capture
The current path taken once ILS is locked results in the aircraft being a long way off the centerline even after travelling a long distance. This is an attempt to make the turning more realistic. (@tedrek - would appreciate your opinion on this).
Before:
After:
@glangford, this is a great improvement. The suggestion I would make for this approach would be replacing the calculation based on model.rate.turn with a fixed value (or in the next line defining the resultant range). model.rate.turn isn't in use anymore since #118, I really should have removed it completely at that time.
I actually took a look at this the other night, if you look at my branch t-ils-intercept you can see the approach I was looking at. The drawback is that it's a fairly accurate representation of an ILS intercept, but the game doesn't currently differentiate between visual approaches and ILS (or other) approaches. My feeling is that until visual approaches are implemented this is too restrictive to be fun.
As reference on ILS this is what I had found:
The primary aspect of ILS which is of interest is that the instrument in the cockpit only shows ±2.5 degrees lateral from the localizer (the signal is received further out but the needles simply show maximum); therefore heading adjustments will generally only happen within this central cone. The limitations which ATC operate under are: the heading must be within 30 degrees of the localizer heading, altitude must be below the glideslope at intercept, and distance is 2 miles outside of the final approach fix by the time the aircraft is on the center line.
@tedrek - this is great. My change was just a quick hack to try and force capture to be quicker, but you have a much better fix in the works!
If I play-test the t-ils-intercept changes, in the scenario I was testing (Modesto Five arrival at SFO with no radar vectors) it won't capture at all. I think this is good though - the aircraft is coming in at too sharp a heading and trying to land on 28R; it probably shouldn't be able to:
For a second test, I give vectors (heading 270) - see here how late it captures (around WETOR):
I'm not sure if the capture is too late here, or if it is correct.
@tedrek - I think I see the cause of late intercept. From the t-ils-intercept branch:
// lock ILS if at the right angle and altitude
if ((abs(this.altitude - glideslope_altitude) < glideslope_window)
&& (abs(offset_angle) < radians(3))
&& (abs(this.heading - angle) < radians(30))
&& (offset[1] < ils)) {
Not 100% sure what offset_angle is, but is radians(3) too stringent here? I don't have a better reference than Wikipedia which describes the localizer coverage like this:
Coverage: at 25 nmi (46 km) within 10° from CL; at 17 nmi (31 km) within 10° and 35° from CL; at 10 nmi (19 km) outside 35° if coverage is required.
https://en.wikipedia.org/wiki/Localizer
Also - I don't know how the glide slope window is arrived at in general, but it could use this:
The centre of the glide slope signal is arranged to define a glide path of approximately 3° above horizontal (ground level). The beam is 1.4° deep (0.7° below the glide-path centre and 0.7° above).
@glangford I'm glad you like what you see in general.
In the first case the relevant restriction is that ATC must give a vector within 30 degrees of the localizer heading for an aircraft to intercept the localizer. However the aircraft would be capable of intercepting at a higher angle; it's just likely to overshoot in the process. This should probably be implemented as a negative score for too-sharp intercepts instead of not intercepting at all.
In the second case the intercept is fine; in real life, for an ILS approach you need to supply vectors to be established halfway between AXMUL and CEPIN.
For the late intercept, this is intentional. offset_angle is the angle between the aircraft's current position and the centerline. The localizer coverage is significantly wider than 3 degrees and the aircraft can receive the signal outside of the 3 degrees used there. However, the instruments in the cockpit show a maximum of 2.5 degrees either side of the center line and pilots consequently will fly ATC heading until the instruments show less than maximum deflection. I should have used 2.5 to be completely accurate.
The glideslope window is derived from the glideslope at 3 degrees. It encompasses the space below the glideslope as well to allow approaching from underneath.
As you found, the approach comes across as quite restrictive compared to what has been in place. I will try to re-work for a wider capture angle with scoring to discourage high-angle intercepts and we'll see how that plays. Hopefully I'll have something for you tomorrow.
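A rough sketch of what that re-work might look like (hypothetical — the function and threshold names below are invented for illustration, not taken from the atc codebase):

```javascript
// Hypothetical sketch of a wider localizer capture with an intercept-angle
// penalty; not the actual atc implementation.
function radians(deg) { return deg * Math.PI / 180; }

function localizerCapture(offsetAngle, interceptAngle) {
  // Cockpit instruments show at most 2.5 degrees of deflection either side
  // of the centerline, so capture once the aircraft is inside that window.
  var captured = Math.abs(offsetAngle) < radians(2.5);
  // Still capture on intercepts sharper than 30 degrees, but apply a
  // score penalty instead of refusing the capture outright.
  var penalty = captured && Math.abs(interceptAngle) > radians(30) ? -5 : 0;
  return { captured: captured, penalty: penalty };
}
```

The -5 is a placeholder; the actual scoring weight would need tuning in playtesting.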
| gharchive/pull-request | 2015-08-31T18:11:33 | 2025-04-01T06:46:21.751226 | {
"authors": [
"glangford",
"tedrek"
],
"repo": "zlsa/atc",
"url": "https://github.com/zlsa/atc/pull/158",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2149363798 | Add lint to enforce SMIME BRs: 7.1.4.2.1 requirement for mailbox addr…
Hello, I'm going to be attempting to continue the work Rob Lee had been doing and would like to contribute lints from GlobalSign to the zlint repository. We mostly have smime lints to contribute developed in the past year and my goal is to continue contributing them one at a time. This is my first contribution beyond a documentation update so let me know if there is anything missing.
Description:
All mailbox addresses appearing in subjectDN or dirName must be repeated in san:rfc822Name or san:otherName. This lint does its best to detect mailbox address values in the subjectDN or dirName and if any are detected ensures they are repeated.
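The core check can be sketched as a set-containment test. This is illustrative JavaScript only — the actual lint is written in Go as part of zlint, and real mailbox-address detection is more involved than this regex:

```javascript
// Illustrative sketch of the lint's core idea: every value in the subject DN
// that looks like a mailbox address must also appear among the SAN
// rfc822Name / otherName values. Not the real Go implementation.
var mailboxRe = /^[^@\s]+@[^@\s]+\.[^@\s]+$/; // rough mailbox detector

function lintMailboxAddresses(subjectValues, sanAddresses) {
  var sanSet = new Set(sanAddresses.map(function (a) { return a.toLowerCase(); }));
  var missing = subjectValues.filter(function (v) {
    return mailboxRe.test(v) && !sanSet.has(v.toLowerCase());
  });
  return missing.length === 0 ? "pass" : "error";
}
```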
I missed the integration tests before opening the PR. I have run them for this lint and identified 6 expected errors:
>make integration INT_FLAGS="-lintSummary -fingerprintSummary -lintFilter='e_mailbox_address_shall_contain_an_rfc822_name'" PARALLELISM=24
...
summary of result type by certificate fingerprint:
3087f97b6cff020b5320e18d3e326074cbaa128142660f2debe4564ab1ab0179 fatals: 0 errs: 1 warns: 0 infos: 0
5f3fcccca91a7b39e8995f79c35cb5e604d4ee0487ea1a41993c84304c0a5c99 fatals: 0 errs: 1 warns: 0 infos: 0
63d23132c2511f33bb947f27c398bb824109ccf2d6a2037e3713fe9f7a43b15d fatals: 0 errs: 1 warns: 0 infos: 0
b034fa1aa9e501dc14b43d43dfe2210de3e5551744494b55d5f0abd865c67efc fatals: 0 errs: 1 warns: 0 infos: 0
c6ac841c78191101725ca7d5ed499be47c15ebeece7d74e6d095e2925e7bb404 fatals: 0 errs: 1 warns: 0 infos: 0
e4dbfc94e616ffb59904e394d9dcdd3ab55c26c5586440f37c058eecb907a344 fatals: 0 errs: 1 warns: 0 infos: 0
summary of result type by lint name:
e_mailbox_address_shall_contain_an_rfc822_name fatals: 0 errs: 6 warns: 0 infos: 0
corpus_test.go:164: expected lint "e_mailbox_address_shall_contain_an_rfc822_name" to have result fatals: 0 errs: 0 warns: 0 infos: 0 got fatals: 0 errs: 6 warns: 0 infos: 0
1 lint(s) failed
--- FAIL: TestCorpus (71.55s)
FAIL
FAIL github.com/zmap/zlint/v3/integration 71.555s
FAIL
make: *** [makefile:36: integration] Error 1
Thank you for joining in on the contributions on behalf of GlobalSign! Help from the CAs is always appreciated.
I took a quick smoke check on the failed certificates, and their failures seem to make sense.
https://crt.sh/?id=36746425&opt=x509lint,cablint https://crt.sh/?id=2369907900&opt=cablint,x509lint https://crt.sh/?id=2380565472&opt=cablint,x509lint https://crt.sh/?id=2369908278&opt=cablint,x509lint https://crt.sh/?id=2370069568&opt=cablint,x509lint https://crt.sh/?id=2370050635&opt=cablint,x509lint
If you happen to ever have any feedback on the workflow and tooling in the repository then I would love to hear it! Writing lints can be redundant, but also error prone. So I try to make it as low friction as reasonably possible for contributors to hop right in.
The main thing I can think of is improving genTestCerts to add support for some unsupported fields at the moment, and potentially a way to save certificate generation configs but that may be wandering uncomfortably close to re-implementing openssl. Otherwise the tooling is very good once I figured it out, and I found it quite easy to fix my mistakes based on the github checks
@bitlux
The main thing I can think of is improving genTestCerts to add support for some unsupported fields at the moment
This is indeed quite difficult. We rely on ZMap's fork of x509 in order to do some custom, filthy, parsing that helps lints out so that may be an avenue. However, that fork has not been merged with its upstream in...like...7 years. I shudder to think what that could possibly look like.
and potentially a way to save certificate generation configs
I was actually thinking on this last week because I was having trouble generating a cert that even applied for an SMIME lint (I always forget which magical fields and which magical before/after make it all work). So perhaps at least a template or two would help.
| gharchive/pull-request | 2024-02-22T15:43:48 | 2025-04-01T06:46:21.774524 | {
"authors": [
"christopher-henderson",
"toddgaunt-gs"
],
"repo": "zmap/zlint",
"url": "https://github.com/zmap/zlint/pull/800",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
342014042 | Crash caused by launching to another planet from a starship
This issue originally detailed a crash which took place during every startup following the original crash. It is now evident that these crashes were caused by a corrupted ElectricArcFurnace.xml configuration file. The crashes listed below are still a valid issue, however.
Version of Advanced Rocketry
1.12.2-1.4.0-89-UNSTABLE (and 1.12.2-1.4.0-88-UNSTABLE)
Have you verified this is an issue in the latest unstable build?
Y
Version of LibVulpes
1.12.2-0.2.8-32-UNSTABLE
Version of Minecraft
1.12.2 (forge-1.12.2-14.23.4.2705-universal – recommended build)
Crash report or log (if applicable)
https://pastebin.com/Hu811LtM (build 88 – Terra Nova)
https://pastebin.com/Bcd9JGBc (build 89 – Terra Nova)
https://pastebin.com/gfs4nvWD (build 89 – Stella)
Other potentially useful files
planetDefs.xml: https://pastebin.com/FqktJPC8
I can provide other files from the server upon request if necessary.
Description of the problem
Our issue arose during our first expedition with our first starship. Initially orbiting around Earth, our expedition took the following order of warps:
Earth -> Uranus
Uranus -> KELT-1b
KELT-1b -> Novus (landed rockets here briefly)
Novus -> Terra Nova (attempted to land rockets here before crash)
The first player to have their rocket teleport from the starship's dimension to that of Terra Nova was yalnif. Upon this teleportation, the server closed with the first listed crash report.
The second player to have their rocket teleport was myself, GarethPW. This occurred using build 89 rather than build 88. The same exact error occurred, detailed by the second crash report above.
A third crash has occurred following the launch to the planet, Stella. Its report can also be found above.
Upon each of these crashes, the rocket used ceases to exist and the player inside floats back to their starship.
It looks like it may be having trouble loading the traverse biomes. Can you try swapping those out for AR or vanilla biomes and see if the crash occurs with those?
Is this still an issue? Please comment if it is and I will reopen it if you are unable to. We are closing old issues to see what is still relevant and needs to be fixed.
| gharchive/issue | 2018-07-17T17:29:19 | 2025-04-01T06:46:21.781938 | {
"authors": [
"GarethPW",
"dmodoomsirius",
"zmaster587"
],
"repo": "zmaster587/AdvancedRocketry",
"url": "https://github.com/zmaster587/AdvancedRocketry/issues/1356",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
203529434 | [Suggestion] Command to reload recipes ingame
So it's no longer necessary to restart the game in order to find out if the recipe is working
I definitely need this too.
I've got it to the point where the recipe actually works ingame, but have yet to figure out how to actually update JEI
Iirc MineTweaker/CraftTweaker's code is open source and I know that /mt reload triggers a JEI update. Maybe look and see how they did it in case it could apply here too?
I ended up just digging through JEI to find it; ironically, it did explicitly mention MineTweaker in the method description. I'll probably end up doing that for NEI though, since I'm backporting to the 1.7 builds.
Amusingly enough I just did the same thing as I was typing up a reply to share my findings.
| gharchive/issue | 2017-01-27T00:26:26 | 2025-04-01T06:46:21.789646 | {
"authors": [
"GazbonicAcid",
"TheGuyRunningSouth",
"wormzjl",
"zmaster587"
],
"repo": "zmaster587/AdvancedRocketry",
"url": "https://github.com/zmaster587/AdvancedRocketry/issues/367",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
236754943 | Space stations not appearing
Been having an issue lately where if you make a space station, send it off in the rocket to go to the proper dimension to be created... it will not appear or even place the space station at all. If you go up in a rocket and try to descend it will take you down to the minimum level and spawn you back at the top.
Only way I've gotten this to work without losing your rocket is hopping in creative, hopping out of the rocket as it is descending, making a huge platform to catch the rocket and when the rocket is spawned back up to max height it will descend down onto the platform.
Any idea on how to fix this?
On latest release for 1.7.10 mc: 1.7.1.0-1.0.19b.jar
When it takes you to the space dimension, what are your coordinates?
Also can you send a copy of the advRocketry/temp.dat file?
Not modified at all on multiplayer.
I'm not too sure on the ID number of the station.. we've had this error pop up only once that I know of.
This is the crash report (I believe this may or may not be related to this) of a player who crashes when teleporting between dimensions from a planet to the overworld... it might give some insight?
https://pastebin.com/1MSCN4M2
temp.zip
Is this still an issue? Please comment if it is and I will reopen it if you are unable to. We are closing old issues to see what is still relevant and needs to be fixed.
| gharchive/issue | 2017-06-19T00:16:04 | 2025-04-01T06:46:21.793675 | {
"authors": [
"RussianSanity",
"dmodoomsirius",
"zmaster587"
],
"repo": "zmaster587/AdvancedRocketry",
"url": "https://github.com/zmaster587/AdvancedRocketry/issues/738",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
191838625 | Strip Control Codes from various settings.
The controlpanel module and /znc commands do not strip control codes from essential ZNC and IRC server settings. This commit strips color codes on important settings.
Password (A hash is generated with the control code characters allowing login with a colored password.)
Nick/ident (Most IRCds do not allow special characters in nick and ident.)
Integers (When setting an integer value, the new value is set to 0 rather than the intended value.)
Addchan (ZNC will add the channel with the color code, allowing a color code channel with the same name as a non-color coded channel.)
Addserver (ZNC will add the server which ZNC will fail to resolve.)
Bindhost (ZNC cannot bind to colored addresses as they do not exist.)
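As a rough illustration of the idea (ZNC itself is C++ — this is not its implementation, and the exact set of codes stripped here is an assumption):

```javascript
// Illustrative sketch of stripping mIRC formatting/control codes from a
// setting value before storing it; ZNC's real implementation is C++.
function stripControlCodes(value) {
  return value
    .replace(/\x03\d{0,2}(,\d{1,2})?/g, "") // color codes with fg[,bg] digits
    .replace(/[\x00-\x1f]/g, "");           // bold, underline, reset, etc.
}
```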
Current coverage is 39.52% (diff: 54.54%)
Merging #1345 into master will not change coverage
@@ master #1345 diff @@
==========================================
Files 109 109
Lines 21193 21193
Methods 2903 2903
Messages 0 0
Branches 0 0
==========================================
Hits 8376 8376
Misses 12817 12817
Partials 0 0
Powered by Codecov. Last update 2e6c8d9...f4020e4
Problems with pesky mIRC users that have scripts that always send in colours? :P
Been gone for a while and never got the chance to respond. I'll take another look at this.
| gharchive/pull-request | 2016-11-27T02:46:36 | 2025-04-01T06:46:21.798349 | {
"authors": [
"ChasedSpade",
"Zarthus",
"codecov-io"
],
"repo": "znc/znc",
"url": "https://github.com/znc/znc/pull/1345",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
220783952 | Improve quit module hooks
This fixes the misleading documentation for OnUserQuitMessage() and adds a new module hook (OnQuitIRCMessage()) that is called when ZNC quits an IRC network.
It might be a good idea to rename OnUserQuit()/OnUserQuitMessage() to OnClientQuit()/OnClientQuitMessage(), especially since they haven't been part of an actual release yet. OnQuitIRCMessage() could then potentially be created as OnUserQuitMessage() or similar. I held off because I wasn't certain that was the right path forward. If that's desired, let me know and I'll make the changes here.
Well... In general OnFoo is called when we get FOO from the IRC network and OnUserFoo is called when we get FOO from a client, so OnUserQuit is consistent with the existing naming (we get a QUIT from a user/client).
The Message suffix is then just the special CMessage-based version of that hook.
The only existing module hooks for "we are sending something to the IRC server" are OnJoining, OnIRCRegistration and OnSendToIRC. No clear naming pattern there...
So much for the naming. No idea if we need a special module hook for this, nor what to call it. I guess that's for @DarthGandalf to decide.
I don't think this hook is very useful. Especially, as ZNC is usually staying online without quitting ever.
Also then we'll need to add "sending PRIVMSG", "sending PART", "sending TOPIC", etc.
OnSendToIRC is enough for log module... Side comment: there should be OnSendToIRCMessage, but currently everything in ZNC which sends anything sends a raw line, not a structured message.
That's fair. I'll remove the hook, then, and restrict this PR to just fixing the documentation of OnUserQuitMessage().
| gharchive/pull-request | 2017-04-10T22:02:54 | 2025-04-01T06:46:21.803300 | {
"authors": [
"DarthGandalf",
"elyscape",
"psychon"
],
"repo": "znc/znc",
"url": "https://github.com/znc/znc/pull/1396",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1903027016 | ch.12: Fig 12-8 and 12-7 not described (pg. 398)
These two figures appear but nothing is said about them. It looks like very good information might have been cropped out by mistake?
pg. 398:
pg. 401:
I've seen you've fixed it in the latest edition, thanks.
| gharchive/issue | 2023-09-19T13:43:02 | 2025-04-01T06:46:21.806927 | {
"authors": [
"profnandaa"
],
"repo": "zodiacon/windowskernelprogrammingbook2e",
"url": "https://github.com/zodiacon/windowskernelprogrammingbook2e/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
268104888 | Update URL on the repo
http://zoltantothcom.github.io/vanilla-js-dropdown/
Good catch 👍 I was searching through the code before realized where the issue is :)
lol yeah, i was showing the plugin to a friend but he told me about the issue :P
| gharchive/issue | 2017-10-24T16:42:49 | 2025-04-01T06:46:21.821730 | {
"authors": [
"mike3run",
"zoltantothcom"
],
"repo": "zoltantothcom/vanilla-js-dropdown",
"url": "https://github.com/zoltantothcom/vanilla-js-dropdown/issues/3",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
189115771 | reduce staging puma threads
reduce the number of db connections for staging
Review checklist
[ ] First, the most important one: is this PR small enough that you can actually review it? Feel free to just reject a branch if the changes are hard to review due to the length of the diff.
[ ] If there are any migrations, will the previous version of the app work correctly after they've been run (e.g. they don't remove columns still known about by ActiveRecord)?
[ ] If anything changed with regards to the public API, are those changes also documented in the apiary.apib file?
[ ] Are all the changes covered by tests? Think about any possible edge cases that might be left untested.
Speak now or I'm going to merge this one, as the staging connections are at their limits.
| gharchive/pull-request | 2016-11-14T13:45:27 | 2025-04-01T06:46:21.858171 | {
"authors": [
"camallen"
],
"repo": "zooniverse/Panoptes",
"url": "https://github.com/zooniverse/Panoptes/pull/2041",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
637099314 | Incorrect wrapping of strings with parentheses
The following line prevents a string with parentheses from being wrapped correctly:
https://github.com/zorro-del-caribe/ship-hold-querybuilder/blob/6b496808e6637d586f7323c0046cfb20c992ba92/src/lib/nodes.ts#L25
For example:
wrap("string with (parentheses)") results in:
'string with (parentheses)'
instead of:
'"string with (parentheses)"'
I don't know what you have considered to skip this case (avoiding to wrap a function maybe), but my case is that I'm importing data from a file with all sorts of texts, including texts in parentheses.
To fix that I've changed it to:
const isWrappedRegexp = /'(.*)'|"(.*)"|^\$/;
Thanks for the great job on this tool, it's very helpful.
Another change I had to make was on this line:
from:
https://github.com/zorro-del-caribe/ship-hold-querybuilder/blob/6b496808e6637d586f7323c0046cfb20c992ba92/src/lib/nodes.ts#L31
to:
const x = wrap(value, `$$`);
Using the dollar-quoted string avoids having to escape the single quote '.
Thanks fro your report.
Can you please provide a way to reproduce the bugs:
give or you call the builder
What you would expect the generated sql to be
This would help to create regression tests.
Thanks a lot
Hi @lorenzofox3, I'm reading data from an Excel file, parsing all rows, and for each of them I run code similar to this (consider the row object below as a row read from that file):
const { shiphold } = require("ship-hold");
const sh = shiphold();
const row = {
a: "text",
b: 123,
c: "with (parentheses)",
d: "John's",
e: "single 'quotation mark'",
f: 'double "quotation mark"',
};
(async function (obj) {
const { text } = await sh.insert(obj).into("table").build();
console.log(text);
})(row);
That results into this:
INSERT INTO "table" ( "a", "b", "c", "d", "e", "f" ) VALUES ( 'text', 123, with (parentheses), 'John's', single 'quotation mark', double "quotation mark" )
Note the missing single quotes around some values, and that the possessive apostrophe is not escaped.
When I changed isWrappedRegexp to /'(.*)'|"(.*)"|^\$/;, it fixed the parentheses case:
INSERT INTO "table" ( "a", "b", "c", "d", "e", "f" ) VALUES ( 'text', 123, 'with (parentheses)', 'John's', single 'quotation mark', double "quotation mark" )
Changing from return wrap(value, `'`); to return wrap(value, `$$`); fixed the possessive apostrophe:
INSERT INTO "table" ( "a", "b", "c", "d", "e", "f" ) VALUES ( $$text$$, 123, $$with (parentheses)$$, $$John's$$, single 'quotation mark', double "quotation mark" )
Then this last case I've discovered during this demonstration, when you have quotation marks inside the string: I changed the isWrappedRegexp to const isWrappedRegexp = /^'(.*)'$|^"(.*)"$|^\$/;, generating the expected SQL:
INSERT INTO "table" ( "a", "b", "c", "d", "e", "f" ) VALUES ( $$text$$, 123, $$with (parentheses)$$, $$John's$$, $$single 'quotation mark'$$, $$double "quotation mark"$$ )
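Putting the two changes together, the wrapping behavior can be sketched like this (simplified from ship-hold-querybuilder's nodes.ts — wrap() here is illustrative, not the library's exact code):

```javascript
// Simplified sketch of the final behavior: the anchored regex only treats a
// value as pre-wrapped when the whole string is quoted (or is a "$" parameter
// placeholder), and dollar quoting avoids escaping embedded quotes.
var isWrappedRegexp = /^'(.*)'$|^"(.*)"$|^\$/;

function wrap(value, quote) {
  quote = quote || "$$";
  return isWrappedRegexp.test(value) ? value : quote + value + quote;
}
```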
I see, thanks!
In that case, I would rather recommend using parametrized query:
sh.insert({a:'$a', b:'$b'}).build({a:`it doesn't matter whether there are quote`, b:`or (parenthesis)`});
The idea of using dollar quotes seems very interesting. I am just a bit concerned it would break current behaviour. Have you tried building and running the tests?
Second point: if I am not wrong, dollar quoting is specific to Postgres. In theory it is not a problem, but while I am OK to have a dialect specific to Postgres, I am a bit reluctant to introduce non-standard SQL as default behaviour.
I will try your suggestion, but I've found another issue:
sh.select({ value: "to_char(SUM(charges_usd), '999G999G999G990D00')", as: "total (usd)" }).from("mytable")
The total (usd) alias raises an error when executed, as it'll render to this:
SELECT to_char(SUM(charges_usd), '999G999G999G990D00') AS total (usd) FROM "mytable"
The total (usd) is not quoted if executed without my proposed changes.
Dollar-quoted strings are very handy as they avoid the character-escaping nightmare, but unfortunately they're not part of the SQL standard and would be a blocker for other SQL dialects.
| gharchive/issue | 2020-06-11T15:14:29 | 2025-04-01T06:46:21.906419 | {
"authors": [
"dude333",
"lorenzofox3"
],
"repo": "zorro-del-caribe/ship-hold-querybuilder",
"url": "https://github.com/zorro-del-caribe/ship-hold-querybuilder/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
119526581 | Todo List
[ ] Refactoring
[x] Refactor the initial version's code
[ ] Extract params into a standalone property
[ ] Tests
[x] Functional tests
[ ] Error tests
[ ] Server-side error handling
[ ] Missing parameters
[ ] Invalid API
[ ] Nonexistent API version
[ ] Browser-side JS
[ ] Caching
[x] Split auto-concatenated hk.page.get requests whose length exceeds 1024
[ ] Error handling
[ ] Overlong requests (1024)
[x] Implement multiple ways of reading data (tags)
[x] Apply to the blog
[ ] Adjust the styling of the top area
[x] Deploy to SAE, and implement quick switching between the local-test and SAE configurations
[ ] Documentation
[ ] tutorial
[ ] A brief Contribute guide
So it doesn't support relative pages? I came across your code while using a certain Hexo theme.
c = {
    elements: {},
    scan: function() {
        var e = this.elements,
            t = document.querySelectorAll("[data-hk-page]");
        e.current = document.querySelectorAll("[data-hk-page=current]"),
        e.topArea = document.querySelectorAll("[data-hk-top-pages]"),
        e.pages = {};
        for (var n = 0; n < t.length; ++n) {
            var r = t[n].attributes["data-hk-page"].value,
                a = e.pages[r]; // if not equal? t is the pages, n is the current page
            "current" != r && (a ? a.push(t[n]) : e.pages[r] = [t[n]])
        }
    },
Related discussion has moved to: https://github.com/zry656565/Hit-Kounter-LC/issues/3
| gharchive/issue | 2015-11-30T16:23:00 | 2025-04-01T06:46:22.032204 | {
"authors": [
"qssq",
"zry656565"
],
"repo": "zry656565/Hit-Kounter",
"url": "https://github.com/zry656565/Hit-Kounter/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
392634401 | link to Fusion in README.md not working
The repository https://github.com/ErgoDox-EZ/fusion is no longer visible on GitHub, but it is referenced in this project as the parent project.
Still not visible!!! :)
| gharchive/issue | 2018-12-19T14:47:31 | 2025-04-01T06:46:22.034652 | {
"authors": [
"ghost",
"pcesar22"
],
"repo": "zsa/reactor",
"url": "https://github.com/zsa/reactor/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
109775586 | how to parse more complex type (array & objects)
I am just loving the new json parser. It makes embedded design a lot easier. I have one question. Would it be possible to show an example of how to parse complex tokens? For example:
js = ""x": "value1", "y": "value2", "dsensor": ["s1": 1, "s2": 0], "asensor": ["a1": 1.5, "a2": 0.8] ";
How to check the values?
check(TOKEN_STRING(js, ??????));
check(TOKEN_STRING(js, ??????));
check(TOKEN_STRING(js, ??????));
check(TOKEN_STRING(js, ??????));
Bitbucket: https://bitbucket.org/zserge/jsmn/issue/4
Originally reported by: Anonymous
Originally created at: 2012-04-25T17:13:29.729
Hi,
I am assuming your json data stream is from an HTTP POST response. It doesn't seem like valid json data. In any case, parsing should be easy; just follow the test examples (jsmn_test.c) for the following functions:
test_string() and test_partial_array()
I think the valid json data stream should be:
{
"x" : "value1",
"y" : "value2",
"dsensor" : {
"s1" : 1,
"s2" : 0
},
"asensor" : {
"a1" : 1.5,
"a2" : 0.8
}
}
Hope that information was helpful.
Original comment by: tcousins
I added some examples that should give the idea of how to parse complex JSON strings in the commit 84cb5790d973. Hope it helps.
Original comment by: Serge Zaitsev
tcousins.. your assumption is correct. The json data stream is from http post response's body. sorry for the json error. I check the test example and its still isn't clear on how to handle/parse complex data such as { "x" : "value1", "y" : "value2", "dsensor" : { "s1" : 1, "s2" : 0 }, "asensor" : { "a1" : 1.5, "a2" : 0.8 } }
Does any have any answer for parsing complex data stream?
Original comment by: Anonymous
Have you found a solution to your question? I came across jsmn and was extremely excited to finally find a json parser for embedded systems. However, I've read the documents and none of the given examples show how to parse a complex json string, so I am wondering if you have found a solution?
Original comment by: Anonymous
| gharchive/issue | 2015-10-05T11:14:46 | 2025-04-01T06:46:22.042603 | {
"authors": [
"zserge"
],
"repo": "zserge/jsmn",
"url": "https://github.com/zserge/jsmn/issues/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
339585780 | [iOS] Navigation to Lightbox hangs in between.
On iOS, when an image in the message list is clicked, navigation hangs in between (half the screen is pushed & half is occupied by the chat screen).
Most probably this is due to the PhotoView component we are using from react-native-photo-view.
This looks like the issue:
https://github.com/alwx/react-native-photo-view/issues/141
I notice that before the screen hangs, the Lightbox background is black, and afterwards, the photo is shown in the Lightbox.
The delay seems to occur while, in some sense, the image data is being loaded.
i have the same problem
| gharchive/issue | 2018-07-09T19:56:53 | 2025-04-01T06:46:22.052315 | {
"authors": [
"borisyankov",
"jackrzhang",
"jainkuniya",
"tmjordan"
],
"repo": "zulip/zulip-mobile",
"url": "https://github.com/zulip/zulip-mobile/issues/2781",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
328078552 | developer guide: Add debugging section.
@gnprice it'd be cool if you could add your adb shell debugging workflow to this section once this is merged.
Also just pushed a commit which adds those adb logcat instructions. Please try it out! And mention in the stream if it works for you, or if it doesn't, or send a PR if something's unclear.
I'm not sure it has any particular advantages over the JS console, once you get the latter going. Maybe if you're using a physical Android device it would be hard to get the JS console working? Not sure how it'd work exactly. If so, adb logcat could be helpful for that.
(As I reread my ADB instructions, I'm not sure they help at all if you haven't already used ADB for something. I think standard ADB setup instructions on the web should fill in the gap, but if you find yourself searching for and using those, I'd be glad to take a PR to add appropriate links and/or other advice.)
Thanks for the doc clarifications! I agree with every change but this one:
To use it, start your app.
To use it, start your app in the emulator.
I haven't tried it out myself yet, but the react docs suggest that you can just as easily use the remote js debugger with a physical device. Maybe we can make the line more explicit by saying something like
To use it, start your app in the emulator or on your physical device.
I find the adb instructions helpful, especially the adb logcat -T 100 | grep ReactNativeJS.
For my part, I was familiar with adb, but didn't know about logcat and that you could also use it to debug console.log messages. Also, things break, so it's probably better to point out two tools than just one.
Thanks for the fact-check!
I haven't tried it out myself yet, but the react docs suggest that you can just as easily use the remote js debugger with a physical device.
Huh, I didn't see that, and it's not obvious to me how that'd work (how would the device talk to the browser on the host?)
But then I searched in the page for "device", and there's this other section at the bottom, with instructions. I'll edit to point at that.
| gharchive/pull-request | 2018-05-31T10:13:05 | 2025-04-01T06:46:22.057707 | {
"authors": [
"gnprice",
"roberthoenig"
],
"repo": "zulip/zulip-mobile",
"url": "https://github.com/zulip/zulip-mobile/pull/2597",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
549117256 | docs/dev server: link to mobile development doc
Something that could improve the first-time setup process for using a dev server for mobile development is a link from the dev server setup docs back to here: https://github.com/zulip/zulip-mobile/blob/master/docs/howto/dev-server.md.
I found that page first, and it sent me to https://zulip.readthedocs.io/en/latest/development/overview.html which sent me to https://zulip.readthedocs.io/en/latest/development/using.html. But by the time I'd read those and completed their steps, I'd momentarily forgotten about the mobile doc, which contains some crucial setup steps. So I'd like to link back to it (from the latter page probably makes the most sense) just to close the loop.
This is an issue instead of a PR because https://zulip.readthedocs.io/en/latest/development/using.html (the page I propose to edit) contains some important server-specific development info that mobile developers may want to be aware of. But this info is commingled with info on the web app, which mobile developers may skim past, especially if they see an inviting link back to the mobile docs. The concern is that a reader might also disregard the server-specific info because it's mixed up with the web app stuff.
Would it be effective to restructure that page (https://zulip.readthedocs.io/en/latest/development/using.html) in something like this way, so people can get where they're going more quickly, but also not accidentally skip past important info:
Server info/development (live reloading, Django/Tornado processes, possible need to restart the server and how, etc.)
Web app development (CSS changes, hot module replacement, etc.)
Mobile app development (just a link back to https://github.com/zulip/zulip-mobile/blob/master/docs/howto/dev-server.md)
Or might simply appending a link at the bottom be enough?
I like the idea of adjusting using.html to have headings for server/web/mobile; that organization would likely make it better for the webapp use case as well as simplifying the mobile setup. Feel free to make a PR doing that sort of reorganization.
| gharchive/issue | 2020-01-13T18:53:01 | 2025-04-01T06:46:22.063058 | {
"authors": [
"chrisbobbe",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/issues/13655",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1388314786 | Organize three-dot message menu
To make the three-dot message menu easier to understand, we should organize it into sections, separated by dividers. This builds on the work in #22537 / #22652.
Logic changes in what options to show
[ ] Show "Move message" whenever the user has permissions to change the stream and/or topic, regardless of whether the "Edit message" option is also shown.
Sections
Quote and reply or forward
Edit message
Move message
Delete message
Add emoji reaction [only for the user's own messages, same as now]
Mark as unread from here
Collapse message
View message source [only when Edit message is not available, same as now]
View edit history
View read receipts
Copy link to message
CZO discussion threads:
https://chat.zulip.org/#narrow/stream/101-design/topic/three-dot.20message.20menu
https://chat.zulip.org/#narrow/stream/101-design/topic/Message.20actions.20per.20role.2Fvisibility
@rebeccasodereng we want to get this done quickly as it is part of our release goal. It should not be much work and I can work on this if you haven't started yet.
@alya there are a couple of options that are not mentioned in the issue -
"Hide muted message again" is shown when the sender is muted.
"Remind me about this" is shown if reminder feature is enabled. I am not sure what is the current status of this.
Also, we show some text when viewing a message that was sent while the user was not subscribed to the stream; this can probably go at the end, in the last section only.
@rebeccasodereng we want to get this done quickly as it is part of our release goal. It should not be much work and I can work on this if you haven't started yet.
@sahil839 I'm sorry I didn't realise it was part of the release goal and was planning to work on it in a few days, so you can work on it instead!
Cool, feel free to take another issue to work on. Thanks
Also we show some text when seeing a message which was sent when user was not subscribed to the stream, this can probably be at the end only in the last section.
Yeah, that position sounds fine, though I'm not sure whether we need that notice at all... (I didn't realize we had it. ;)) I started a thread on CZO.
| gharchive/issue | 2022-09-27T20:14:19 | 2025-04-01T06:46:22.071567 | {
"authors": [
"alya",
"rebeccasodereng",
"sahil839"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/issues/23076",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1477775946 | Help center: Add instructions to find "server URL", needed to log in on mobile/desktop/terminal
We sometimes get reports from users who are understandably frustrated when they can't log in to their Zulip org via the mobile app because they don't know where to find the "server URL" that we ask for.
In the mobile app, we currently say
Enter your Zulip server URL:
We'd like to add a link saying something like "What's this?" or "Where can I find this?", pointing at the new doc, to help people find their way.
@alya, I've assigned @drrosa; either one of you should please let me know if this is wrong. 🙂
@chrisbobbe This is documented under https://zulip.com/help/logging-in#find-the-zulip-log-in-url.
Ah, great, thanks!!
| gharchive/issue | 2022-12-06T00:07:55 | 2025-04-01T06:46:22.074498 | {
"authors": [
"alya",
"chrisbobbe"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/issues/23753",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
214311765 | subscriptions: No way to change a stream's privacy without subscribing admin.
If a private stream is created with no admin subscribed, then there is no way to change its privacy without subscribing an admin.
I don't think we have a good approach for implementing this that doesn't begin with implementing a "stream administrator" concept.
I expect this to be fixed in #19525.
| gharchive/issue | 2017-03-15T08:03:36 | 2025-04-01T06:46:22.075798 | {
"authors": [
"HarshitOnGitHub",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/issues/4075",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
250226848 | right sidebar: Add inactive users to user typeahead.
If someone is gone for three weeks, we remove them from the right sidebar. We should still include them at the bottom of the user search typeahead. (They do show up in the compose box typeahead, already.)
It feels like something is broken when you know someone is part of a realm and you can't find them in the user search bar.
@rishig this is a duplicate of #5775.
| gharchive/issue | 2017-08-15T05:46:02 | 2025-04-01T06:46:22.077086 | {
"authors": [
"rishig",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/issues/6124",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
938078138 | tests: Remove duplicated tests from test_realm.py.
This PR removes the duplicated individual tests in test_realm.py as we test all those
values already in do_test_realm_update_api.
Looks great, merged, thanks @sahil839! It's nice to get to delete 200 lines of redundant tests.
| gharchive/pull-request | 2021-07-06T16:41:24 | 2025-04-01T06:46:22.078698 | {
"authors": [
"sahil839",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/19140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
947911008 | billing: Edit helper text on sponsorship application
nonprofit -> non-profit, since we were being inconsistent about it. Cf. this reference for choosing the version with a dash. Our landing page uses the dashed form.
Edit pass on default helper text to use more user-oriented language.
Testing plan:
Manual testing.
GIFs or screenshots:
looks good, we can merge once CI passes.
Merged, thanks @alya!
| gharchive/pull-request | 2021-07-19T18:22:23 | 2025-04-01T06:46:22.081769 | {
"authors": [
"alya",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/19318",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1001181503 | portico: Revamp /for/open-source.
Pushing for my convenience only; do not review.
Testing plan:
GIFs or screenshots:
It looks like we have some broken links, but otherwise this looks great; fixed and merged. Thanks @alya and @eeshangarg!
| gharchive/pull-request | 2021-09-20T16:45:55 | 2025-04-01T06:46:22.083769 | {
"authors": [
"alya",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/19793",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2082764985 | narrow.js: Remove redundant else if block from deactivate function.
It's okay to remove this because the else if condition
narrow_state.filter() === undefined && has_visited_all_messages is always false: narrow_state.filter() === undefined is true when the default view is All messages, while has_visited_all_messages is true when coming from a different view that isn't All messages. Both cannot be true simultaneously.
Self-review checklist
[X] Self-reviewed the changes for clarity and maintainability
(variable names, code reuse, readability, etc.).
Communicate decisions, questions, and potential concerns.
[X] Explains differences from previous plans (e.g., issue description).
[ ] Highlights technical choices and bugs encountered.
[X] Calls out remaining decisions and concerns.
[ ] Automated tests verify logic where appropriate.
Individual commits are ready for review (see commit discipline).
[X] Each commit is a coherent idea.
[X] Commit message(s) explain reasoning and motivation for changes.
Completed manual review and testing of the following:
[X] Visual appearance of the changes.
[ ] Responsiveness and internationalization.
[ ] Strings and tooltips.
[X] End-to-end functionality of buttons, interactions and flows.
[X] Corner cases, error conditions, and easily imagined bugs.
Isn't this the code path for trying to visit "all messages" when already displaying it?
(Please close this if indeed that is in error)
| gharchive/pull-request | 2024-01-15T22:44:55 | 2025-04-01T06:46:22.089479 | {
"authors": [
"Rinwaoluwa",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/28575",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2629441912 | update_ui: Standardize pencil icon color for todo and poll.
This PR aligns the color of the "edit" pencil button in polls and to-do lists with the color of the message action buttons, ensuring consistency across the interface. The changes cover both the light and dark themes and include:
Unhovered State: The pencil button color now matches the unhovered state of the message action buttons.
Hovered State: The color transition on hover for the pencil button matches that of the message action buttons.
Unhovered State
Before
After
Poll
Todo List
Hovered State
Before
After
Poll
Todo List
Fixes: #30339.
Self-review checklist
[x] Self-reviewed the changes for clarity and maintainability
(variable names, code reuse, readability, etc.).
Communicate decisions, questions, and potential concerns.
[x] Explains differences from previous plans (e.g., issue description).
[ ] Highlights technical choices and bugs encountered.
[ ] Calls out remaining decisions and concerns.
[ ] Automated tests verify logic where appropriate.
Individual commits are ready for review (see commit discipline).
[x] Each commit is a coherent idea.
[x] Commit message(s) explain reasoning and motivation for changes.
Completed manual review and testing of the following:
[x] Visual appearance of the changes.
[x] Responsiveness and internationalization.
[ ] Strings and tooltips.
[x] End-to-end functionality of buttons, interactions and flows.
[x] Corner cases, error conditions, and easily imagined bugs.
Hey @alya, the PR is ready for review. Can you take a look?
Thanks.
Nice, merged, thanks for the clean PR @whilstsomebody!
| gharchive/pull-request | 2024-11-01T16:46:11 | 2025-04-01T06:46:22.100445 | {
"authors": [
"timabbott",
"whilstsomebody"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/32204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
200011231 | docs: Restrict user email addresses to certain domains
Added a doc to restrict user email addresses to certain domains.
@derAnfaenger Oops! You were right but I've now fixed the problem. Here is a screenshot of the working page:
@anirudhjain75 Fixed 👍
Hi @Jeffthebest1!
Thanks for working on this!
Added a few comments, could you address those?
I think the title of this topic is kind of overly technical, and the first paragraph also reflects that. We allow administrators to restrict users to certain domains, but I think it's more fluent to say something like "We allow administrators to block new users from signing up who are not in the organization. The administrator can accomplish this by restricting users to certain domains. (Ordinary users don't have this power.)"
This is a little verbose, and my wording is just a suggestion, but I feel like the current wording is too abrupt and sort of assumes the user knows the context.
I'm ok with leaving the title as it is.
@showell I left the title because I couldn't come up with anything better but I improved the first few sentences.
@showell @trueskawka Fixed! 👍
| gharchive/pull-request | 2017-01-11T06:09:55 | 2025-04-01T06:46:22.104300 | {
"authors": [
"Jeffthebest1",
"showell",
"trueskawka"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/3225",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
201858257 | Fixed and improved logrotate file
Changed the logrotate to include all .log files present in the /var/log directory
Fixed ownership and permission for new post-logrotate files
Added post-logrotate SIGHUP signal to release the file handle on *.log files
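For illustration, a logrotate stanza along these lines would implement the three points above (the paths, user/group, and supervisorctl invocation here are assumptions for the sketch, not Zulip's actual configuration):

```
/var/log/zulip/*.log {
    weekly
    rotate 12
    compress
    missingok
    notifempty
    # recreate the log file with the right ownership/permissions
    create 0640 zulip zulip
    sharedscripts
    postrotate
        # tell the supervised processes to release the old file handles
        supervisorctl signal SIGHUP all >/dev/null 2>&1 || true
    endscript
}
```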
Automated message from Dropbox CLA bot
@eldios, thanks for the pull request! It looks like you haven't yet signed the Dropbox CLA. Please sign it here.
@smarx done.
Automated message from Dropbox CLA bot
@eldios, thanks for signing the CLA!
@eldios thanks for working on this! Most of that looks great.
Can you explain more detail from your testing of the SIGHUP piece? Specifically:
What failure mode does one get without it?
Does that end up restarting the entire server?
Hi @timabbott ,
without adding the SIGHUP signal, the open (old and rotated) log files simply won't be released by the process using them. That's because the file handle keeping the inode busy won't be closed unless you specifically say so via (you guessed it :) ) a SIGHUP signal.
This is all pretty standard and well known in the sysadmin/devops world so that was an easy pattern to recognize.
You can see it by yourself cause after a successful logrotation (without the SIGHUP) the "new" .log file (eg: tornado.log) will remain empty (0 byte size) while content will still be written to the "old" .log file (eg: tornadlo.log.1).
Everything works great when you add the SIGHUP method.
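A quick way to see this behavior for yourself (a minimal Python sketch simulating the daemon and logrotate, not Zulip code):

```python
import os
import tempfile

# Simulate a daemon holding a log file open while logrotate renames it.
d = tempfile.mkdtemp()
log_path = os.path.join(d, "app.log")

f = open(log_path, "w")       # the "daemon's" open log handle
f.write("before rotation\n")
f.flush()

os.rename(log_path, log_path + ".1")  # what logrotate does when rotating

f.write("after rotation\n")   # still written through the old handle...
f.flush()
f.close()

# ...so it lands in the *renamed* file; app.log is never recreated
# unless the process closes and reopens its log (e.g. on SIGHUP).
rotated_contents = open(log_path + ".1").read()
new_file_exists = os.path.exists(log_path)
```

Without a SIGHUP (or equivalent reopen logic), `new_file_exists` stays false and all writes keep going to the rotated `.1` file.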
So, the way SIGHUP works is that it usually just tells the process to close and reopen its file and process handles without fully restarting everything, but that may vary a lot depending on how the SIGHUP signal is handled by the process that receives it.
In our case we're doing this rotation during a low traffic moment of the day and nobody noticed any effect during usage, so far.
Let me know if that's enough of an explanation or you need more details.
Thank you.
Yeah, I'm familiar with SIGHUP and all that; I just know that we have supervisord configured to use SIGHUP when killing some of the processes in a Zulip instance, and was a bit worried about that issue. I'll test this out and make sure I'm happy with it before merging, but I think it shouldn't require any changes.
Ah, I see.
Well that didn't happen in our case, but it may also be that the process died and supervisor brought it up immediately.
Let me know what you find in your investigation and we'll tweak the logrotate file accordingly if needed.
Thanks.
If the SIGHUP works to close and re-open the log files, then we should remove the delaycompress option, which should no longer be needed at all.
From an ops perspective I see delaycompress as a hack to work around the failure of old logs to be really closed, and it burns up disk space longer, means backups take more space (yeah, I back up logs sometimes!), and generally means the rotation is a multi-step process instead of a single event.
@eldios Could you try it without delaycompress and see if it behaves well?
@jonjensen now that I validated that the handle files issue is gone I'll test that too. Will report soon, as a good side improvement.
@jonjensen @timabbott I confirm that it's working flawlessly, without a hitch, with the "compress" option as well.
So I guess we're all set, on my side.
@timabbott are you going to merge this pull request? Thanks :)
Yes, planning to, just have been super busy with the release engineering for Zulip 1.5 and decided I wanted to give this a slightly longer test period on some of the production servers I run before merging it. Sorry for the delay!
@timabbott no worries, I was just checking in cause I needed to clean my ToDo list from the "follow up item" :)
Thanks again!
I'm embarrassed to say that I haven't had a time to do that, so this will miss Zulip 1.6. I'll make sure it happens just after the release, though, and the plan at least is for Zulip 1.7 to be a fairly fast release. Sorry @eldios!
hey @timabbott no worries. Thanks for following up and I'm glad it'll be useful to other people in future :)
@timabbott looking at the way you changed the logrotate file, it seems that the way you changed it the file handles for {worker,manage}.log will only be released IF the server.log file gets a rotation and thus the SIGHUP signal is sent to supervisorctl's processes. Am I reading it right?
The Zulip/Django logging code is designed to transparently handle the Django-based log files (errors.log, manage.log, server.log, etc.) being renamed without the process needing to restart; it'll just open a new file if the file it was going to log to has disappeared. I tested this carefully with moving files aside both with mv and with tweaked logrotate configuration.
So there's no need for SIGHUP in the current configuration in master.
Based on the discussion so far, I think the only piece from this that we need is the create line; do you agree? If so I can proceed with extracting and merging that.
seems reasonable.
I'm sure people will squeak and complain if we missed something :)
Go for it and thanks for your efforts!
Great, merged as 05723acb3c0f05036cd0cc9f3ba390b5ddda6210, thanks for all your help with this @eldios!
supervisorctl signal restarts the process — is there any other way to send the signal without restarting the process?
@gvinodapplift can you explain a bit more about what you're trying to do? I don't really understand the question.
The supervisor process generates logs; after logrotate runs, a signal needs to be sent to the process so it switches to the new log file.
In the logrotate conf file, postrotate sends the supervisorctl signal.
Anyway, it got resolved.
Thanks timabbott for the comment
| gharchive/pull-request | 2017-01-19T13:16:14 | 2025-04-01T06:46:22.119173 | {
"authors": [
"eldios",
"gvinodapplift",
"jonjensen",
"smarx",
"timabbott"
],
"repo": "zulip/zulip",
"url": "https://github.com/zulip/zulip/pull/3385",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
My OpenAI API keys get banned shortly after use
Asking for advice: how do I solve this — upgrade to Plus? laf is hosted in Hong Kong — is that also an unsupported region? Or should I find a stable proxy, or set up the service on an overseas machine?
Asking for advice: how do I solve this — upgrade to Plus? laf is hosted in Hong Kong — is that also an unsupported region? Or should I find a stable proxy, or set up the service on an overseas machine?
Account bans are somewhat unpredictable and unrelated to region. laf will add a Tokyo node later; I don't know whether that will help.
| gharchive/issue | 2023-03-14T02:02:17 | 2025-04-01T06:46:22.129038 | {
"authors": [
"sheng3233836",
"zuoFeng59556"
],
"repo": "zuoFeng59556/chatGPT",
"url": "https://github.com/zuoFeng59556/chatGPT/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
47808328 | Clearing Lightbox "data-options" for 'close_selectors' do not work
According to the documentation, specifically: http://foundation.zurb.com/docs/components/clearing.html#optional-javascript-configuration
It claims we should be able to set specific classes or IDs to close the clearing lightbox when clicked, without touching any JavaScript, by utilizing "data-options". For some reason these do not do anything, as seen here: http://jsfiddle.net/schwarzarbyter/o6d92y8k/
Have you tried it like this?
$(document).foundation({
clearing : {
close_selectors : '.clearing-close',
}
});
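For reference, the data-attribute form that the docs describe would presumably look something like this (hypothetical markup sketched from Foundation's data-options convention; the class name is an assumption):

```html
<ul class="clearing-thumbs" data-clearing
    data-options="close_selectors: .clearing-close">
  <li><a href="img/photo.jpg"><img src="img/photo-thumb.jpg"></a></li>
</ul>
```

The JavaScript configuration above is the workaround when the data-attribute form is not honored.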
@rafibomb I think it is high time to close this issue
Thanks @rafibomb
| gharchive/issue | 2014-11-05T07:44:44 | 2025-04-01T06:46:22.131778 | {
"authors": [
"ncoden",
"rafibomb",
"toobulo"
],
"repo": "zurb/foundation-sites",
"url": "https://github.com/zurb/foundation-sites/issues/6012",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
124091063 | Fixes missing $gutter declaration in foundation-flex-grid mixin.
Added a $gutter: $grid-column-gutter declaration to the foundation-flex-grid mixin to fix https://github.com/zurb/foundation-sites/issues/7679.
The issue is that in Ruby Sass, a variable defined in an @if or @else block can't leave that scope. Initializing the variable above the @if statement resolves the issue. See 5388206ef4c6dd9e08372f599a821f4da8659c45
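The fix can be sketched like this (a simplified illustration of the scoping pattern, not the actual mixin body):

```scss
@mixin foundation-flex-grid {
  // Initialize above the @if so Ruby Sass keeps the variable in scope
  $gutter: $grid-column-gutter;

  @if type-of($grid-column-gutter) == 'map' {
    // Reassignment inside the block is fine; only *definitions*
    // inside @if/@else are lost when the block ends in Ruby Sass
    $gutter: map-get($grid-column-gutter, small);
  }

  // $gutter is now visible here in both Ruby Sass and libsass
}
```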
I made these changes and the CSS was generated correctly; however, my layout was broken.
To build the CSS correctly, I needed to change "app.scss" where I call the function:
@include foundation-everything($flex: true);
to:
@include foundation-everything;
@include foundation-flex-grid;
Finally, Foundation works fine!
| gharchive/pull-request | 2015-12-28T16:55:57 | 2025-04-01T06:46:22.134924 | {
"authors": [
"gakimball",
"metaloha",
"thalysgs"
],
"repo": "zurb/foundation-sites",
"url": "https://github.com/zurb/foundation-sites/pull/7704",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2613962405 | 🛑 Voice Chat API is down
In da2ec1e, Voice Chat API (https://voice.roblox.com/) was down:
HTTP code: 404
Response time: 62 ms
Resolved: Voice Chat API is back up in b7d90b3 after 1 hour, 55 minutes.
| gharchive/issue | 2024-10-25T12:23:33 | 2025-04-01T06:46:22.159643 | {
"authors": [
"zv8001"
],
"repo": "zv8001/EEI-Stats",
"url": "https://github.com/zv8001/EEI-Stats/issues/516",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1584648555 | Missing device configuration: Eurotronic Comet Z-Wave
Checklist
[X] It is not in the configuration DB
[X] It was not merged recently or has a pending PR
Which device is missing?
Eurotronic Comet Z-wave
Manufacturer ID
0x0148
Product Type
0x0004
Product ID
0x0003
Firmware Version
11.26.4
Is the device listed on the Z-Wave Alliance website?
https://products.z-wavealliance.org/products/1601
Do you have a manual?
https://manual.zwave.eu/backend/make.php?lang=en&sku=EURECOMET&cert=
The device is detected, but the manufacturer does not show in Home Assistant. It looks like a product ID increment, as the config DB has products 1 and 2 but not 3.
The parameters are enumerated during discovery, as below, but are not named or editable once the interview completes (attached)
2023-02-14T18:03:52.164Z CNTRLR « [Node 113] received information for parameter #1:
parameter name: LCD_invert
value format: UnsignedInteger
value size: 1 bytes
min value: 0
max value: 1
default value: 0
is read-only: false
is advanced (UI): false
has bulk support: true
alters capabilities: false
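For reference, a parameter report like the one above would translate into zwave-js's device config format roughly like this (a partial, hand-written sketch following my reading of the zwave-js device config schema — not the actual merged file; the remaining parameters from the interview would follow the same pattern):

```json
{
  "manufacturer": "Eurotronic",
  "manufacturerId": "0x0148",
  "label": "Comet Z",
  "description": "Z-Wave Radiator Thermostat",
  "devices": [{ "productType": "0x0004", "productId": "0x0003" }],
  "firmwareVersion": { "min": "0.0", "max": "255.255" },
  "paramInformation": [
    {
      "#": "1",
      "label": "Invert LCD",
      "valueSize": 1,
      "minValue": 0,
      "maxValue": 1,
      "defaultValue": 0
    }
  ]
}
```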
@zwave-js-bot import config 1601 from zwa
Hm, automatically importing the config was not helpful. The IDs are a bit different than what we currently have for the Comet Z, but that can happen. We also don't have any config params defined at the moment, so they should be added too.
If it helps, this is a snippet from the interview log:
"index": 0,
"installerIcon": 4608,
"userIcon": 4608,
"status": 4,
"ready": true,
"isListening": false,
"isRouting": true,
"isSecure": false,
"manufacturerId": 328,
"productId": 3,
"productType": 4,
"firmwareVersion": "11.26.4",
"zwavePlusVersion": 2
@AlCalzone Is there anything I can do to help get the config parameters added, so they can be edited in Home assistant?
I'm happy to help as I can't use these TRV's without the ability to alter some of the parameters (like invert display etc)
Yeah, see the comment above: https://github.com/zwave-js/node-zwave-js/issues/5464#issuecomment-1430241560
Thanks, I'll try to get what you need.
I don't use Z-Wave JS UI, just the supported non-UI zwave-js, so I don't believe I have access to parameter editing.
@kpine Thanks!
great workaround until the device gets proper support :)
If I manually set parameter 8 (the temperature offset) in zwave-js-ui, I get a "Parameter 8 set failed" error in return. Is this because the device is not yet fully supported? I would have thought that setting parameters manually always works.
zwave-js-ui: 8.21.1.103c5be
zwave-js: 11.5.3
Sorry, this was a mistake on my side. While the command still seems to be ignored, I cannot reproduce the error message any more. Probably I mixed up devices in the UI.
In case this info helps anyone who can make the config for this - I moved to JSUI as I couldn't get the parameters working otherwise .
2023-09-17T17:36:10.413Z - INTERVIEW STARTED
2023-09-17T17:36:10.472Z - INTERVIEW STAGE COMPLETED Arg 0: ProtocolInfo
2023-09-17T17:36:11.772Z - INTERVIEW STAGE COMPLETED Arg 0: NodeInfo
2023-09-17T17:36:11.827Z - VALUE ADDED Arg 0: └─commandClassName: Manufacturer Specific └─commandClass: 114 └─property: manufacturerId └─endpoint: 0 └─newValue: 328 └─propertyName: manufacturerId
2023-09-17T17:36:11.830Z - VALUE ADDED Arg 0: └─commandClassName: Manufacturer Specific └─commandClass: 114 └─property: productType └─endpoint: 0 └─newValue: 4 └─propertyName: productType
2023-09-17T17:36:11.833Z - VALUE ADDED Arg 0: └─commandClassName: Manufacturer Specific └─commandClass: 114 └─property: productId └─endpoint: 0 └─newValue: 3 └─propertyName: productId
2023-09-17T17:36:11.955Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: libraryType └─endpoint: 0 └─newValue: 3 └─propertyName: libraryType
2023-09-17T17:36:11.959Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: protocolVersion └─endpoint: 0 └─newValue: 7.15 └─propertyName: protocolVersion
2023-09-17T17:36:11.962Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: firmwareVersions └─endpoint: 0 └─newValue: 11.26, 1.4 └─propertyName: firmwareVersions
2023-09-17T17:36:11.965Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: hardwareVersion └─endpoint: 0 └─newValue: 1 └─propertyName: hardwareVersion
2023-09-17T17:36:13.044Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: sdkVersion └─endpoint: 0 └─newValue: 7.15.4 └─propertyName: sdkVersion
2023-09-17T17:36:13.050Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: applicationFrameworkAPIVersion └─endpoint: 0 └─newValue: 10.15.4 └─propertyName: applicationFrameworkAPIVersion
2023-09-17T17:36:13.058Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: applicationFrameworkBuildNumber └─endpoint: 0 └─newValue: 256 └─propertyName: applicationFrameworkBuildNumber
2023-09-17T17:36:13.062Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: hostInterfaceVersion └─endpoint: 0 └─newValue: unused └─propertyName: hostInterfaceVersion
2023-09-17T17:36:13.070Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: hostInterfaceBuildNumber └─endpoint: 0 └─newValue: 0 └─propertyName: hostInterfaceBuildNumber
2023-09-17T17:36:13.075Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: zWaveProtocolVersion └─endpoint: 0 └─newValue: 7.15.4 └─propertyName: zWaveProtocolVersion
2023-09-17T17:36:13.079Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: zWaveProtocolBuildNumber └─endpoint: 0 └─newValue: 256 └─propertyName: zWaveProtocolBuildNumber
2023-09-17T17:36:13.084Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: applicationVersion └─endpoint: 0 └─newValue: 11.26.4 └─propertyName: applicationVersion
2023-09-17T17:36:13.088Z - VALUE ADDED Arg 0: └─commandClassName: Version └─commandClass: 134 └─property: applicationBuildNumber └─endpoint: 0 └─newValue: 43707 └─propertyName: applicationBuildNumber
2023-09-17T17:36:13.217Z - VALUE ADDED Arg 0: └─commandClassName: Battery └─commandClass: 128 └─property: level └─endpoint: 0 └─newValue: 85 └─propertyName: level
2023-09-17T17:36:13.220Z - VALUE ADDED Arg 0: └─commandClassName: Battery └─commandClass: 128 └─property: isLow └─endpoint: 0 └─newValue: false └─propertyName: isLow
2023-09-17T17:36:13.720Z - VALUE ADDED Arg 0: └─commandClassName: Thermostat Mode └─commandClass: 64 └─property: mode └─endpoint: 0 └─newValue: 1 └─propertyName: mode
2023-09-17T17:36:13.946Z - VALUE ADDED Arg 0: └─commandClassName: Thermostat Setpoint └─commandClass: 67 └─property: setpoint └─propertyKey: 1 └─endpoint: 0 └─newValue: 8.5 └─propertyName: setpoint └─propertyKeyName: Heating
2023-09-17T17:36:14.009Z - VALUE ADDED Arg 0: └─commandClassName: Thermostat Setpoint └─commandClass: 67 └─property: setpoint └─propertyKey: 11 └─endpoint: 0 └─newValue: 16 └─propertyName: setpoint └─propertyKeyName: Energy Save Heating
2023-09-17T17:36:14.098Z - VALUE ADDED Arg 0: └─commandClassName: Protection └─commandClass: 117 └─property: local └─endpoint: 0 └─newValue: 0 └─propertyName: local
2023-09-17T17:36:14.270Z - VALUE ADDED Arg 0: └─commandClassName: Multilevel Sensor └─commandClass: 49 └─property: Air temperature └─endpoint: 0 └─newValue: 24 └─propertyName: Air temperature
2023-09-17T17:36:15.411Z - VALUE ADDED Arg 0: └─commandClassName: Multilevel Switch └─commandClass: 38 └─property: currentValue └─endpoint: 0 └─newValue: 0 └─propertyName: currentValue
2023-09-17T17:36:16.862Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 1 └─endpoint: 0 └─newValue: 1 └─propertyName: LCD_invert
2023-09-17T17:36:16.936Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 2 └─endpoint: 0 └─newValue: 5 └─propertyName: LCD_timeout
2023-09-17T17:36:16.999Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 3 └─endpoint: 0 └─newValue: 0 └─propertyName: Backlight
2023-09-17T17:36:17.060Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 4 └─endpoint: 0 └─newValue: 0 └─propertyName: Battery_report
2023-09-17T17:36:17.143Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 5 └─endpoint: 0 └─newValue: 5 └─propertyName: Measured_temperature_report
2023-09-17T17:36:17.208Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 6 └─endpoint: 0 └─newValue: 0 └─propertyName: Valve_opening_percentage_report
2023-09-17T17:36:17.294Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 7 └─endpoint: 0 └─newValue: 1 └─propertyName: window_opening_percentage_report
2023-09-17T17:36:17.375Z - VALUE ADDED Arg 0: └─commandClassName: Configuration └─commandClass: 112 └─property: 8 └─endpoint: 0 └─newValue: 0 └─propertyName: Measured_temperature_offset
2023-09-17T17:36:17.600Z - VALUE ADDED Arg 0: └─commandClassName: Notification └─commandClass: 113 └─property: alarmType └─endpoint: 0 └─newValue: 0 └─propertyName: alarmType
2023-09-17T17:36:17.603Z - VALUE ADDED Arg 0: └─commandClassName: Notification └─commandClass: 113 └─property: alarmLevel └─endpoint: 0 └─newValue: 0 └─propertyName: alarmLevel
2023-09-17T17:36:17.654Z - VALUE UPDATED Arg 0: └─commandClassName: Notification └─commandClass: 113 └─property: alarmType └─endpoint: 0 └─newValue: 0 └─prevValue: 0 └─propertyName: alarmType
2023-09-17T17:36:17.656Z - VALUE UPDATED Arg 0: └─commandClassName: Notification └─commandClass: 113 └─property: alarmLevel └─endpoint: 0 └─newValue: 0 └─prevValue: 0 └─propertyName: alarmLevel
2023-09-17T17:36:17.658Z - VALUE ADDED Arg 0: └─commandClassName: Notification └─commandClass: 113 └─property: System └─propertyKey: Hardware status └─endpoint: 0 └─newValue: 3 └─propertyName: System └─propertyKeyName: Hardware status
2023-09-17T17:36:17.660Z - INTERVIEW STAGE COMPLETED Arg 0: CommandClasses
2023-09-17T17:36:17.663Z - INTERVIEW STAGE COMPLETED Arg 0: OverwriteConfig
2023-09-17T17:36:17.665Z - INTERVIEW STAGE COMPLETED Arg 0: Complete
2023-09-17T17:36:17.738Z - READY
2023-09-17T17:36:17.758Z - INTERVIEW COMPLETED
(for some reason 'add code' isn't formatting the below like it should)
It's a bit counter-intuitive. First paste code, then select it, then press "format as code".
Product description: https://manual.zwave.eu/backend/make.php?lang=en&sku=EURECOMET&cert=
Thanks to everyone that contributed to adding this ;)
@cadwizzard did you manage to tune parameters of your Comet devices ?
We fail here, so would be pleased to have your report.
Thank you 👍
Hi.
I gave up I'm afraid, and moved to JSUI.
But I'm really unhappy with the CometZ.
Max reporting detail is 0.5c, unlike the Spirit (I have set to 0.3 normally)
The freeze setpoint is higher than the spirit
Also the spirits actually report detail to 0.01 degree, unlike the comet which is 0.5
Worst of all though is that they seemingly don't report temperature change sometimes reliably. I've had to request a multilevel sensor update to get the right info. This has left my boiler running for hours because the room temperature isn't reported.
I found a re-interview seems to fix that for a while and then they report every 0.5 degree again.
I had 20+ spirits and never had these issues. Upgraded 6 to CometZ. Had the problems on every one. Now can't trust them. Have returned 2 and now am trialling the ZigBee Bosch radiator thermostat 2 in z2mqqt. So far they have been brilliant. I didn't want to move away from z wave, but wanted to modernize the radiators.
I feel like Eurotronic have cheaped out with the hardware on the CometZ compared to the spirits.
Hope you have better luck.
Thank you for both your reports.
I plan to use them in Manufacturer specific mode (hence #6583), combined with a room-temperature sensor, and a smart Home Assistant thermostat (for ex. https://github.com/KartoffelToby/better_thermostat). So fully getting rid of the builtin temperature sensor and logic. Should give better results.
But for now stuck with parameters' configuration :|
But for now stuck with parameters' configuration :|
.... So was I. The only reason I moved to JSUI was to accommodate these devices and the reporting resolution and mirror parameters.
The other 50+ devices were well supported in the native z wave js implementation
But I've found the network map and device management super useful. Especially because of the weirdness I was trying to figure out with the CometZ.
It's such a shame they don't work right.
Also, if you change the display to mirrored, the up button lowers the setpoint and the down button raises it as you look at the TRV; the buttons don't switch along with the display.
The spirits don't have that issue
OK, parameters issues fixed in https://github.com/zwave-js/node-zwave-js/pull/6583 👍
May then be usable now 😉
| gharchive/issue | 2023-02-14T18:35:47 | 2025-04-01T06:46:22.190396 | {
"authors": [
"AlCalzone",
"Mart124",
"RickieCZ",
"bronger",
"cadwizzard"
],
"repo": "zwave-js/node-zwave-js",
"url": "https://github.com/zwave-js/node-zwave-js/issues/5464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1728694091 | Missing device configuration: sunricher SR-ZV9101SAC-HP
Checklist
[X] It is not in the configuration DB
[X] It was not merged recently or has a pending PR
Which device is missing?
sunricher SR-ZV9101SAC-HP
Manufacturer ID
0x0330
Product Type
0x0200
Product ID
0xd00c
Firmware Version
1.26
Is the device listed on the Z-Wave Alliance website?
https://products.z-wavealliance.org/products/2918
Do you have a manual?
https://www.sunricher.com/media/resources/manual/SR-ZV9101SAC-HP-V3-Instruction.pdf
@zwave-js-bot import config 2918 from zwa
We have an (older?) variant of this device, which the linked alliance DB entry refers to (product ID 0xd005).
The manual seems to have 3 new config parameters.
Let's stay conservative and:
copy https://github.com/zwave-js/node-zwave-js/blob/master/packages/config/config/devices/0x0330/zv-9101.json into a new file
change the product ID
add the 3 new config parameters
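Following that conservative plan, the new file would mirror zv-9101.json with the updated product ID. Everything below is a hypothetical sketch of the usual zwave-js device-file shape: the label, description, firmware range, and the example parameter are placeholders, not values taken from the manual:

```json
{
  "manufacturer": "Sunricher",
  "manufacturerId": "0x0330",
  "label": "SR-ZV9101SAC-HP",
  "description": "Placeholder description (newer hardware revision)",
  "devices": [
    { "productType": "0x0200", "productId": "0xd00c" }
  ],
  "firmwareVersion": { "min": "0.0", "max": "255.255" },
  "paramInformation": [
    {
      "#": "9",
      "label": "Placeholder for one of the 3 new parameters",
      "valueSize": 1,
      "minValue": 0,
      "maxValue": 1,
      "defaultValue": 0
    }
  ]
}
```

The three real parameter definitions (number, size, range, default) need to be checked against the manual before submitting.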
| gharchive/issue | 2023-05-27T12:25:05 | 2025-04-01T06:46:22.197024 | {
"authors": [
"AlCalzone",
"MarcHagen"
],
"repo": "zwave-js/node-zwave-js",
"url": "https://github.com/zwave-js/node-zwave-js/issues/5851",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
848969172 | feat: new node events for interviewing
As discussed in Discord, I added three new events:
interview started
interview stage completed
not ready
This will allow us to update the ready status in zwave-js-server-python as needed, and also will allow us to display information to the user about the last completed stage of interview during a re-interview or during startup
Do you have any guidance on the expected behavior of a node when it is not ready? Should we assume that the node needs to be left alone until the interview finishes, e.g. disable all interaction with the node?
Should we assume that the node needs to be left alone until the interview finishes, e.g. disable all interaction with the node?
Yes. Like during the initial interview, you should not interact with a node that is not ready. So after calling refreshInfo, you need to wait for the ready event.
| gharchive/pull-request | 2021-04-02T05:22:01 | 2025-04-01T06:46:22.200164 | {
"authors": [
"AlCalzone",
"raman325"
],
"repo": "zwave-js/node-zwave-js",
"url": "https://github.com/zwave-js/node-zwave-js/pull/2233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1776515218 | fix: don't check for Inclusion Controller CC support when responding
fixes: #5948
@zwave-js-bot automerge
| gharchive/pull-request | 2023-06-27T09:47:39 | 2025-04-01T06:46:22.201190 | {
"authors": [
"AlCalzone"
],
"repo": "zwave-js/node-zwave-js",
"url": "https://github.com/zwave-js/node-zwave-js/pull/5949",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2579381155 | NVM Restore to SmartStick G8 800 Series not successful
Checklist
[ ] I am not using Home Assistant. Or: a developer has told me to come here.
[X] I have checked the troubleshooting section and my problem is not described there.
[X] I have read the changelog and my problem is not mentioned there.
Deploy method
Home Assistant Add-on
Z-Wave JS UI version
zwave-js-ui: 9.21.1
ZwaveJS version
zwave-js: 13.4.0
Describe the bug
I tried to integrate my new G8 stick into Home Assistant and transfer the old configuration via NVM Restore. Unfortunately, the restore is aborted with the error “error while calling restore VM: The NVM File ID must be set before serializing”.
To Reproduce
trying another restore process
Expected behavior
Is there a solution for this error? If so, what is it?
Additional context
No response
Please make a driver log, loglevel debug and attach it here as a file (drag & drop into the text field).
cc @AlCalzone
Update to the latest version.
I also have the same problem. If I have understood correctly, this should be fixed in https://github.com/zwave-js/node-zwave-js/issues/7219. When is the next release planned?
It's out now
The restore has now worked perfectly, thank you very much for the quick fix!
I can also confirm that it is now working. Thank you very much for the quick implementation.
| gharchive/issue | 2024-10-10T16:56:56 | 2025-04-01T06:46:22.207581 | {
"authors": [
"AlCalzone",
"Josef69",
"robertsLando",
"stephanseitz"
],
"repo": "zwave-js/zwave-js-ui",
"url": "https://github.com/zwave-js/zwave-js-ui/issues/3938",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1713584111 | feat
No longer watches everything globally; added watchDirs and ignoreDirs
Optimized the cache logic
Shuts down the chokidar watcher promptly
Supports TypeScript
Please publish to npm as soon as possible
Merged and published to npm. Thanks for contributing the code!
| gharchive/pull-request | 2023-05-17T10:20:58 | 2025-04-01T06:46:22.220952 | {
"authors": [
"meisiwan",
"zxx960"
],
"repo": "zxx960/vite-plugin-prettier",
"url": "https://github.com/zxx960/vite-plugin-prettier/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166289270 | Crash on CtrlG
When I press Ctrl+G to see the help, micro crashes:
Micro encountered an error: runtime error: index out of range
runtime.errorString runtime error: index out of range
/usr/local/go/src/runtime/panic.go:426 (0x815b9)
gopanic: reflectcall(nil, unsafe.Pointer(d.fn), deferArgs(d), uint32(d.siz), uint32(d.siz))
/usr/local/go/src/runtime/panic.go:15 (0x7fa89)
panicindex: panic(indexError)
/Users/Stephan/mygo/src/github.com/zyedidia/micro/cmd/micro/view.go:687 (0x45db6)
(*View).DisplayView: highlightStyle = v.matches[viewLine][colN]
/Users/Stephan/mygo/src/github.com/zyedidia/micro/cmd/micro/view.go:814 (0x46b18)
(*View).Display: v.DisplayView()
/Users/Stephan/mygo/src/github.com/zyedidia/micro/cmd/micro/micro.go:182 (0x2184b)
RedrawAll: v.Display()
/Users/Stephan/mygo/src/github.com/zyedidia/micro/cmd/micro/micro.go:289 (0x225ce)
main: RedrawAll()
/usr/local/go/src/runtime/proc.go:188 (0x830d0)
main: main_main()
/usr/local/go/src/runtime/asm_amd64.s:1998 (0xb1411)
goexit: BYTE $0x90 // NOP
My go env:
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/Stephan/mygo"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GO15VENDOREXPERIMENT="1"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fno-common"
CXX="clang++"
CGO_ENABLED="1"
Could you give me the file that you were viewing when you pressed CtrlG, because it seems to be working fine for me.
Also what is the output of micro -version?
At first I couldn't reproduce it again; now I can.
To make it interesting, this error only occurs when I connect my external display. If I disconnect it, CtrlG works. When the error occurs, it even occurs on an empty buffer. No file loaded.
To me it looks like it depends on the size of the terminal window. I'm on OS X 10.11 (El Capitan) using iTerm2 2.9. When I make my terminal window smaller, the error doesn't happen either.
I built this version myself from the master branch of this repo. Micro version: e890c87
I'm experiencing the same issue on Arch Linux. It always happens for me and doesn't seem to be related to terminal size or empty buffer. I only installed micro today so I don't have a lot experience with it (but loving it so far!).
Micro encountered an error: runtime error: index out of range
runtime.errorString runtime error: index out of range
/home/zbyedidia/go/src/runtime/panic.go:426 (0x480849)
/home/zbyedidia/go/src/runtime/panic.go:15 (0x47ed19)
/home/zbyedidia/micro/cmd/micro/view.go:687 (0x444d76)
/home/zbyedidia/micro/cmd/micro/view.go:814 (0x445ad8)
/home/zbyedidia/micro/cmd/micro/micro.go:182 (0x42080b)
/home/zbyedidia/micro/cmd/micro/micro.go:289 (0x42158e)
/home/zbyedidia/go/src/runtime/proc.go:188 (0x482360)
/home/zbyedidia/go/src/runtime/asm_amd64.s:1998 (0x4b1631)
Linux 4.6.4-1-ARCH x86_64 GNU/Linux
GNOME Terminal 3.20.2
Micro version: e890c87 (installed from Arch Linux AUR)
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/usr/lib/go"
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
Hmm, alright. It seems the issue is that the viewport changes size when a split is made but the syntax highlighting doesn't re-highlight, which causes an index-out-of-bounds error when the view is drawn and it looks up the color of a row/column that the syntax highlighter doesn't think exists.
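The lookup that panics is `v.matches[viewLine][colN]` (view.go:687 in the trace above). A defensive guard would look something like this; it is a hypothetical, simplified sketch (with `int` standing in for micro's style type), not the actual patch:

```go
package main

// safeStyle looks up a highlight style without panicking when the
// syntax-highlighting matches lag behind a resized viewport.
// Illustrative only: micro's real matches hold style values, not ints.
func safeStyle(matches [][]int, line, col int) (style int, ok bool) {
	if line < 0 || line >= len(matches) {
		return 0, false // highlighter hasn't produced this row yet
	}
	row := matches[line]
	if col < 0 || col >= len(row) {
		return 0, false // row is shorter than the drawn column
	}
	return row[col], true
}
```

The alternative (and what actually matters here) is making sure the highlighter re-runs whenever the viewport changes, so the matches never go stale in the first place.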
Anyway, I'm not exactly sure under what circumstances this issue happens, but I think I know how to fix it.
Also I think this is probably a more general issue regarding splits. It will probably also crash when you open a split with a file that is syntax highlighted (given the correct circumstances).
I think I may have fixed the issue, but I can't reproduce it so I'm not sure. Let me know if it's fixed.
It's fixed for me.
Alright great. I'll close this issue then. Please reopen if there is still a problem.
Ok, works for me now too. Thank you.
| gharchive/issue | 2016-07-19T09:41:31 | 2025-04-01T06:46:22.236167 | {
"authors": [
"WardL",
"sbrosinski",
"zyedidia"
],
"repo": "zyedidia/micro",
"url": "https://github.com/zyedidia/micro/issues/198",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1329391576 | How to sort TailwindCSS class without prettier?
Hi.
I see the upstream version opted out of Prettier, and this project does the same.
How do you sort Tailwind CSS classes without prettier-plugin-tailwindcss?
Thanks.
Hi.
I see the upstream version opted out of Prettier, and this project does the same. How do you sort Tailwind CSS classes without prettier-plugin-tailwindcss?
Thanks.
use this vscode ext https://marketplace.visualstudio.com/items?itemName=heybourn.headwind
Hi.
I see the upstream version opted out of Prettier, and this project does the same. How do you sort Tailwind CSS classes without prettier-plugin-tailwindcss?
Thanks.
use this vscode ext https://marketplace.visualstudio.com/items?itemName=heybourn.headwind
Oh, unfortunately I don't use VS Code; a CLI app is preferred. I'll keep using Prettier, then.
If anyone is looking for a CLI tool to sort Tailwind CSS classes, use avencera/rustywind: CLI for organizing Tailwind CSS classes
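For what it's worth, the core idea behind all of these sorters (Headwind, rustywind, the Prettier plugin) is ordering classes against a fixed priority list. A toy sketch of that idea, with a made-up priority list far shorter than what the real tools ship:

```javascript
// Toy illustration of class sorting: order by a fixed priority list,
// unknown classes last. ORDER here is invented for the example; real
// tools ship a complete Tailwind ordering.
const ORDER = ["container", "flex", "grid", "p-", "m-", "text-", "bg-"];

function priority(cls) {
  const i = ORDER.findIndex((p) => cls === p || cls.startsWith(p));
  return i === -1 ? ORDER.length : i;
}

function sortClasses(classString) {
  return classString
    .split(/\s+/)
    .filter(Boolean)
    .sort((a, b) => priority(a) - priority(b)) // stable sort keeps ties in place
    .join(" ");
}
```

Running sortClasses("text-sm flex p-2") with this toy list yields "flex p-2 text-sm".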
| gharchive/issue | 2022-08-05T03:14:46 | 2025-04-01T06:46:22.259459 | {
"authors": [
"azzamsa",
"zynth17"
],
"repo": "zynth17/vitailse",
"url": "https://github.com/zynth17/vitailse/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Create Menu, Urine Test & Image Repo Form
add dashboard menu, image repo & urine test form
Don't forget to pull; there's a new update.
You need to build:
Patient registration form
Display a specific patient
Follow the template I've prepared.
| gharchive/pull-request | 2023-08-13T07:00:03 | 2025-04-01T06:46:22.265038 | {
"authors": [
"aqiefsyahmi",
"zyq-m"
],
"repo": "zyq-m/mis",
"url": "https://github.com/zyq-m/mis/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |