Column summary:

| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 7–112 |
| repo_url | string | length 36–141 |
| action | string | 3 distinct values |
| title | string | length 1–744 |
| labels | string | length 4–574 |
| body | string | length 9–211k |
| index | string | 10 distinct values |
| text_combine | string | length 96–211k |
| label | string | 2 distinct values |
| text | string | length 96–188k |
| binary_label | int64 | 0 to 1 |
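The per-column annotations above (value ranges for int64 columns, distinct-value counts, string-length ranges) read like auto-generated dataset statistics. As a hypothetical illustration of how such summaries can be computed, assuming pandas and a toy frame reusing three of the column names (the values are taken from the sample rows, not the full dataset):

```python
import pandas as pd

# Toy frame standing in for the full dataset; column names match the table above.
df = pd.DataFrame({
    "binary_label": [0, 1, 1],
    "label": ["non_process", "process", "process"],
    "title": ["Fix flaky CI", "Support ENUM in Coprocessor", "AMP source checks"],
})

# int64 columns: value range (reported above as e.g. "0 to 1").
print(df["binary_label"].min(), df["binary_label"].max())  # 0 1

# Low-cardinality string columns: distinct-class count ("2 distinct values").
print(df["label"].nunique())  # 2

# Free-text string columns: string-length range ("length 1-744" for title).
lengths = df["title"].str.len()
print(lengths.min(), lengths.max())  # 12 27
```

On the real data, the same three calls per column would reproduce the summary table.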

Row 10,603:
id: 13,429,439,759
type: IssuesEvent
created_at: 2020-09-07 01:50:31
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: Support ENUM / SET / BIT in Coprocessor
labels: PCP-S1 difficulty/hard sig/coprocessor status/help-wanted
body:
## Description The Coprocessor does not support the ENUM / SET / BIT type in the current implementation, which cause some expressions cannot be pushed down if the expression contains the ENUM / SET / BIT type. We can improve this situation by implementing ENUM / SET / BIT operations. ## Difficulty - Hard ## Score - 2100 ## Mentor(s) - @lonng - @breeswish ## Recommended Skills - Rust programming - Algorithm
index: 1.0
text_combine:
Support ENUM / SET / BIT in Coprocessor - ## Description The Coprocessor does not support the ENUM / SET / BIT type in the current implementation, which cause some expressions cannot be pushed down if the expression contains the ENUM / SET / BIT type. We can improve this situation by implementing ENUM / SET / BIT operations. ## Difficulty - Hard ## Score - 2100 ## Mentor(s) - @lonng - @breeswish ## Recommended Skills - Rust programming - Algorithm
label: process
text:
support enum set bit in coprocessor description the coprocessor does not support the enum set bit type in the current implementation which cause some expressions cannot be pushed down if the expression contains the enum set bit type we can improve this situation by implementing enum set bit operations difficulty hard score mentor s lonng breeswish recommended skills rust programming algorithm
binary_label: 1
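Comparing columns within a row, `text_combine` appears to be the title and body joined with `" - "`, and `text` a lowercased, cleaned version of that string. A hypothetical reconstruction of the cleaning, inferred only from these samples and not the dataset authors' actual code:

```python
import re

def combine(title: str, body: str) -> str:
    """text_combine appears to be: title + " - " + body."""
    return f"{title} - {body}"

def clean(text: str) -> str:
    """Hypothetical cleaning that approximates the `text` column:
    lowercase, drop URLs and digit-bearing tokens, turn remaining
    punctuation into spaces, and collapse whitespace. (Approximate:
    it drops the leading emoji that the second row's `text` keeps.)"""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    kept = []
    for tok in text.split():
        if any(ch.isdigit() for ch in tok):  # drop tokens like "nrf52", "2100"
            continue
        kept.append(re.sub(r"[^a-z]+", " ", tok))  # punctuation -> spaces
    return " ".join(" ".join(kept).split())

print(clean(combine("Support ENUM / SET / BIT in Coprocessor", "## Description")))
# support enum set bit in coprocessor description
```

The printed output matches the opening of this row's `text` value.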

Row 15,703:
id: 19,848,413,026
type: IssuesEvent
created_at: 2022-01-21 09:32:04
repo: ooi-data/CE07SHSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered
repo_url: https://api.github.com/repos/ooi-data/CE07SHSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered
action: opened
title: 🛑 Processing failed: ValueError
labels: process
body:
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:32:03.501616. ## Details Flow name: `CE07SHSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in 
_get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
index: 1.0
text_combine:
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:32:03.501616. ## Details Flow name: `CE07SHSM-SBD12-05-WAVSSA000-recovered_host-wavss_a_dcl_motion_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async 
raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
label: process
text:
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host wavss a dcl motion recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask 
array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
binary_label: 1

Row 197,602:
id: 14,935,429,046
type: IssuesEvent
created_at: 2021-01-25 11:56:50
repo: zephyrproject-rtos/zephyr
repo_url: https://api.github.com/repos/zephyrproject-rtos/zephyr
action: closed
title: test: kernel: Test kernel.common.stack_protection_arm_fpu_sharing.fatal fails on nrf52 platforms
labels: area: Kernel area: Tests bug platform: nRF priority: medium
body:
**Describe the bug** The test kernel.common.kernel.common.stack_protection_arm_fpu_sharing.fatal fails on nrf52 platforms. **To Reproduce** Steps to reproduce the behavior: 1. go to zephyr dir 2. have nrf52 platform connected (e.g. nrf52dk_nrf52832) 3. run `./scripts/sanitycheck --device-testing -T tests/kernel/fatal/exception/ -p nrf52dk_nrf52832 --device-serial /dev/ttyACM0 -v -v` 4. See error **Expected behavior** test passes **Impact** Not clear **Logs and console output** ``` DEBUG - DEVICE: *** Booting Zephyr OS build zephyr-v2.4.0-1784-g2ce570b03f97 *** DEBUG - DEVICE: Running test suite E: ***** MPU FAULT ***** DEBUG - DEVICE: E: Stacking error (context area might be not valid) DEBUG - DEVICE: E: Data Access Violation DEBUG - DEVICE: E: MMFAR Address: 0x20002c08 DEBUG - DEVICE: E: r0/a1: 0x05fefef9 r1/a2: 0x020b8402 r2/a3: 0x3ae7e3a5 DEBUG - DEVICE: E: r3/a4: 0x401c88f6 r12/ip: 0x3e3ef5eb r14/lr: 0x1e12a498 DEBUG - DEVICE: E: xpsr: 0x12423c00 DEBUG - DEVICE: E: s[ 0]: 0x00000000 s[ 1]: 0xffffffff s[ 2]: 0x00000000 s[ 3]: 0x00000000 DEBUG - DEVICE: E: s[ 4]: 0x00000000 s[ 5]: 0x00000000 s[ 6]: 0x00000000 s[ 7]: 0x00000000 DEBUG - DEVICE: E: s[ 8]: 0x00000000 s[ 9]: 0x00000000 s[10]: 0x00000000 s[11]: 0x00000000 DEBUG - DEVICE: E: s[12]: 0x00000000 s[13]: 0x00000000 s[14]: 0x00000000 s[15]: 0x00000000 DEBUG - DEVICE: E: fpscr: 0x00000013 DEBUG - DEVICE: E: Faulting instruction address (r15/pc): 0xec7237e9 DEBUG - DEVICE: E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0 DEBUG - DEVICE: E: Current thread: 0x20000358 (main) DEBUG - DEVICE: Caught system error -- reason 2 DEBUG - DEVICE: Was not expecting a crash ``` **Environment (please complete the following information):** - OS: Ubuntu 18.04 - Toolchain Zephyr SDK - Commit SHA or Version used zephyr-v2.4.0-1784-g2ce570b03f97
index: 1.0
text_combine:
test: kernel: Test kernel.common.stack_protection_arm_fpu_sharing.fatal fails on nrf52 platforms - **Describe the bug** The test kernel.common.kernel.common.stack_protection_arm_fpu_sharing.fatal fails on nrf52 platforms. **To Reproduce** Steps to reproduce the behavior: 1. go to zephyr dir 2. have nrf52 platform connected (e.g. nrf52dk_nrf52832) 3. run `./scripts/sanitycheck --device-testing -T tests/kernel/fatal/exception/ -p nrf52dk_nrf52832 --device-serial /dev/ttyACM0 -v -v` 4. See error **Expected behavior** test passes **Impact** Not clear **Logs and console output** ``` DEBUG - DEVICE: *** Booting Zephyr OS build zephyr-v2.4.0-1784-g2ce570b03f97 *** DEBUG - DEVICE: Running test suite E: ***** MPU FAULT ***** DEBUG - DEVICE: E: Stacking error (context area might be not valid) DEBUG - DEVICE: E: Data Access Violation DEBUG - DEVICE: E: MMFAR Address: 0x20002c08 DEBUG - DEVICE: E: r0/a1: 0x05fefef9 r1/a2: 0x020b8402 r2/a3: 0x3ae7e3a5 DEBUG - DEVICE: E: r3/a4: 0x401c88f6 r12/ip: 0x3e3ef5eb r14/lr: 0x1e12a498 DEBUG - DEVICE: E: xpsr: 0x12423c00 DEBUG - DEVICE: E: s[ 0]: 0x00000000 s[ 1]: 0xffffffff s[ 2]: 0x00000000 s[ 3]: 0x00000000 DEBUG - DEVICE: E: s[ 4]: 0x00000000 s[ 5]: 0x00000000 s[ 6]: 0x00000000 s[ 7]: 0x00000000 DEBUG - DEVICE: E: s[ 8]: 0x00000000 s[ 9]: 0x00000000 s[10]: 0x00000000 s[11]: 0x00000000 DEBUG - DEVICE: E: s[12]: 0x00000000 s[13]: 0x00000000 s[14]: 0x00000000 s[15]: 0x00000000 DEBUG - DEVICE: E: fpscr: 0x00000013 DEBUG - DEVICE: E: Faulting instruction address (r15/pc): 0xec7237e9 DEBUG - DEVICE: E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0 DEBUG - DEVICE: E: Current thread: 0x20000358 (main) DEBUG - DEVICE: Caught system error -- reason 2 DEBUG - DEVICE: Was not expecting a crash ``` **Environment (please complete the following information):** - OS: Ubuntu 18.04 - Toolchain Zephyr SDK - Commit SHA or Version used zephyr-v2.4.0-1784-g2ce570b03f97
label: non_process
text:
test kernel test kernel common stack protection arm fpu sharing fatal fails on platforms describe the bug the test kernel common kernel common stack protection arm fpu sharing fatal fails on platforms to reproduce steps to reproduce the behavior go to zephyr dir have platform connected e g run scripts sanitycheck device testing t tests kernel fatal exception p device serial dev v v see error expected behavior test passes impact not clear logs and console output debug device booting zephyr os build zephyr debug device running test suite e mpu fault debug device e stacking error context area might be not valid debug device e data access violation debug device e mmfar address debug device e debug device e ip lr debug device e xpsr debug device e s s s s debug device e s s s s debug device e s s s s debug device e s s s s debug device e fpscr debug device e faulting instruction address pc debug device e zephyr fatal error stack overflow on cpu debug device e current thread main debug device caught system error reason debug device was not expecting a crash environment please complete the following information os ubuntu toolchain zephyr sdk commit sha or version used zephyr
binary_label: 0

Row 13,883:
id: 16,654,740,624
type: IssuesEvent
created_at: 2021-06-05 10:07:48
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [PM] Responsive issues in Studies and Apps tab > UI issues
labels: Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
body:
Responsive issues in Studies tab > UI issues 1. Studies tab ![mb8](https://user-images.githubusercontent.com/71445210/115549680-8e539a00-a2c6-11eb-9acd-e04397c58fa9.png) 2. Apps app ![mba11](https://user-images.githubusercontent.com/71445210/115550158-2487c000-a2c7-11eb-9fc6-4b155f8e335a.png)
index: 3.0
text_combine:
[PM] Responsive issues in Studies and Apps tab > UI issues - Responsive issues in Studies tab > UI issues 1. Studies tab ![mb8](https://user-images.githubusercontent.com/71445210/115549680-8e539a00-a2c6-11eb-9acd-e04397c58fa9.png) 2. Apps app ![mba11](https://user-images.githubusercontent.com/71445210/115550158-2487c000-a2c7-11eb-9fc6-4b155f8e335a.png)
label: process
text:
responsive issues in studies and apps tab ui issues responsive issues in studies tab ui issues studies tab apps app
binary_label: 1

Row 7,778:
id: 10,919,252,225
type: IssuesEvent
created_at: 2019-11-21 18:36:56
repo: ampproject/amp-email-viewer
repo_url: https://api.github.com/repos/ampproject/amp-email-viewer
action: opened
title: AMP source checks
labels: intent to implement preprocessing module
body:
This module should perform the following checks to ensure the AMP code should be rendered: - Browser check - checks if the browser supports features such as iframe sandboxing, CSP and `window.crypto` - AMP size check - checks if the size of the AMP code is under a given limit - AMP Validation - runs [amphtml-validator](https://www.npmjs.com/package/amphtml-validator) on the AMP code - AMP component limits check per ampproject/wg-amp4email#4
index: 1.0
text_combine:
AMP source checks - This module should perform the following checks to ensure the AMP code should be rendered: - Browser check - checks if the browser supports features such as iframe sandboxing, CSP and `window.crypto` - AMP size check - checks if the size of the AMP code is under a given limit - AMP Validation - runs [amphtml-validator](https://www.npmjs.com/package/amphtml-validator) on the AMP code - AMP component limits check per ampproject/wg-amp4email#4
label: process
text:
amp source checks this module should perform the following checks to ensure the amp code should be rendered browser check checks if the browser supports features such as iframe sandboxing csp and window crypto amp size check checks if the size of the amp code is under a given limit amp validation runs on the amp code amp component limits check per ampproject wg
binary_label: 1

Row 29,998:
id: 24,464,652,643
type: IssuesEvent
created_at: 2022-10-07 14:04:33
repo: RasaHQ/rasa
repo_url: https://api.github.com/repos/RasaHQ/rasa
action: closed
title: Flaky CI: Fix `Timeouts`
labels: type:maintenance :wrench: area:rasa-oss/infrastructure :bullettrain_front: feature:speed-up-ci :zap: effort:atom-squad/4
body:
**Context**: Most of the flaky CI runs are caused by timeouts, these either terminate in `Failed: Timeout > ...` error message or in a `UnicodeEncodeError`. A comprehensive list of all flaky CI runs from the past 6 weeks: [https://github.com/RasaHQ/rasa/runs/4351136393?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4351136393?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4365904555?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4365904555?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4365904618?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4365904618?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4473900434?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4473900434?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4476706673?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4476706673?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4484360539?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4484360539?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4486260706?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4486260706?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4518222368?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4518222368?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4548416561?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4548416561?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4609020446?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4609020446?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4641544766?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641544766?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4641545136?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641545136?check_suite_focus=true) 
[https://github.com/RasaHQ/rasa/runs/4641544838?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641544838?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700951994?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700951994?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700952084?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700952084?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700952138?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700952138?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4651325075?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4651325075?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4657482477?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4657482477?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4713954188?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4713954188?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4715199388?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4715199388?check_suite_focus=true) **Definition of Done**: * [ ] determine if it's possible to cut down number of steps or data used to reduce runtime in each of the flaky unit tests in the example runs above * [ ] if not possible, bump up timeouts
index: 1.0
text_combine:
Flaky CI: Fix `Timeouts` - **Context**: Most of the flaky CI runs are caused by timeouts, these either terminate in `Failed: Timeout > ...` error message or in a `UnicodeEncodeError`. A comprehensive list of all flaky CI runs from the past 6 weeks: [https://github.com/RasaHQ/rasa/runs/4351136393?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4351136393?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4365904555?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4365904555?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4365904618?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4365904618?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4473900434?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4473900434?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4476706673?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4476706673?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4484360539?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4484360539?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4486260706?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4486260706?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4518222368?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4518222368?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4548416561?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4548416561?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4609020446?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4609020446?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4641544766?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641544766?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4641545136?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641545136?check_suite_focus=true) 
[https://github.com/RasaHQ/rasa/runs/4641544838?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4641544838?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700951994?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700951994?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700952084?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700952084?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4700952138?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4700952138?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4651325075?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4651325075?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4657482477?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4657482477?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4713954188?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4713954188?check_suite_focus=true) [https://github.com/RasaHQ/rasa/runs/4715199388?check_suite_focus=true](https://github.com/RasaHQ/rasa/runs/4715199388?check_suite_focus=true) **Definition of Done**: * [ ] determine if it's possible to cut down number of steps or data used to reduce runtime in each of the flaky unit tests in the example runs above * [ ] if not possible, bump up timeouts
label: non_process
text:
flaky ci fix timeouts context most of the flaky ci runs are caused by timeouts these either terminate in failed timeout error message or in a unicodeencodeerror a comprehensive list of all flaky ci runs from the past weeks definition of done determine if it s possible to cut down number of steps or data used to reduce runtime in each of the flaky unit tests in the example runs above if not possible bump up timeouts
binary_label: 0

Row 274,179:
id: 20,828,230,167
type: IssuesEvent
created_at: 2022-03-19 02:05:02
repo: GenericMappingTools/pygmt
repo_url: https://api.github.com/repos/GenericMappingTools/pygmt
action: closed
title: Missing links to upstream GMT documentation
labels: good first issue documentation
body:
Some functions are missing links to the official GMT docs at https://docs.generic-mapping-tools.org/latest/modules.html. Specifically, the text that says "Full option list at https://docs.generic-mapping-tools.org/latest/somegmtmodule.html" Here are a few that I've found so far: - [ ] [grdsample](https://www.pygmt.org/v0.5.0/api/generated/pygmt.grdsample.html) - [ ] [grdproject](https://www.pygmt.org/v0.5.0/api/generated/pygmt.grdproject.html) - [ ] [project](https://www.pygmt.org/v0.5.0/api/generated/pygmt.project.html) For example, the one for `grdsample` looks like this: https://github.com/GenericMappingTools/pygmt/blob/46fc49eb849420c385a62159e125c515e035ea6b/pygmt/src/grdsample.py#L46-L50 But it should look like this: https://github.com/GenericMappingTools/pygmt/blob/46fc49eb849420c385a62159e125c515e035ea6b/pygmt/src/grdgradient.py#L40-L45 See https://www.pygmt.org/v0.5.0/contributing.html#editing-the-documentation-on-github for instructions on how to edit the documentation to add the missing text. Let us know if you are keen to work on this by leaving a comment below before you start :grin:
index: 1.0
text_combine:
Missing links to upstream GMT documentation - Some functions are missing links to the official GMT docs at https://docs.generic-mapping-tools.org/latest/modules.html. Specifically, the text that says "Full option list at https://docs.generic-mapping-tools.org/latest/somegmtmodule.html" Here are a few that I've found so far: - [ ] [grdsample](https://www.pygmt.org/v0.5.0/api/generated/pygmt.grdsample.html) - [ ] [grdproject](https://www.pygmt.org/v0.5.0/api/generated/pygmt.grdproject.html) - [ ] [project](https://www.pygmt.org/v0.5.0/api/generated/pygmt.project.html) For example, the one for `grdsample` looks like this: https://github.com/GenericMappingTools/pygmt/blob/46fc49eb849420c385a62159e125c515e035ea6b/pygmt/src/grdsample.py#L46-L50 But it should look like this: https://github.com/GenericMappingTools/pygmt/blob/46fc49eb849420c385a62159e125c515e035ea6b/pygmt/src/grdgradient.py#L40-L45 See https://www.pygmt.org/v0.5.0/contributing.html#editing-the-documentation-on-github for instructions on how to edit the documentation to add the missing text. Let us know if you are keen to work on this by leaving a comment below before you start :grin:
label: non_process
text:
missing links to upstream gmt documentation some functions are missing links to the official gmt docs at specifically the text that says full option list at here are a few that i ve found so far for example the one for grdsample looks like this but it should look like this see for instructions on how to edit the documentation to add the missing text let us know if you are keen to work on this by leaving a comment below before you start grin
binary_label: 0

Row 10,232:
id: 13,095,140,348
type: IssuesEvent
created_at: 2020-08-03 13:38:58
repo: ESMValGroup/ESMValCore
repo_url: https://api.github.com/repos/ESMValGroup/ESMValCore
action: opened
title: Preprocessor function multi_model_statistics does not work if no time coordinate or only single time point in data
labels: bug preprocessor
body:
Because the code assumes a time coordinate is present.
1.0
Preprocessor function multi_model_statistics does not work if no time coordinate or only single time point in data - Because the code assumes a time coordinate is present.
process
preprocessor function multi model statistics does not work if no time coordinate or only single time point in data because the code assumes a time coordinate is present
1
16,474
21,407,823,206
IssuesEvent
2022-04-22 00:05:00
nkdAgility/azure-devops-migration-tools
https://api.github.com/repos/nkdAgility/azure-devops-migration-tools
closed
Default agent pool setting for pipeline not migrated
no-issue-activity Pipeline Processor
post migration of my azure pipelines to new org, while running the pipeline in new org, i encounter an error saying "no pool was specified" , what i observed is in the pipeline triggers the "Default agent pool" setting for YAML was not copied from the source to destination while migration. Any fix for the same? attached screenshot below post migration <img width="367" alt="agentpool" src="https://user-images.githubusercontent.com/95461731/157855338-dc3fb1f3-352a-4f3e-a51e-4904c1cdd9e2.PNG">
1.0
Default agent pool setting for pipeline not migrated - post migration of my azure pipelines to new org, while running the pipeline in new org, i encounter an error saying "no pool was specified" , what i observed is in the pipeline triggers the "Default agent pool" setting for YAML was not copied from the source to destination while migration. Any fix for the same? attached screenshot below post migration <img width="367" alt="agentpool" src="https://user-images.githubusercontent.com/95461731/157855338-dc3fb1f3-352a-4f3e-a51e-4904c1cdd9e2.PNG">
process
default agent pool setting for pipeline not migrated post migration of my azure pipelines to new org while running the pipeline in new org i encounter an error saying no pool was specified what i observed is in the pipeline triggers the default agent pool setting for yaml was not copied from the source to destination while migration any fix for the same attached screenshot below post migration img width alt agentpool src
1
12,023
14,738,515,821
IssuesEvent
2021-01-07 04:59:29
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Site062 Facility management payment error
anc-ops anc-process anc-ui anp-1.5 ant-support has attachment
In GitLab by @kdjstudios on Jun 7, 2018, 11:20 **Submitted by:** "Denise Joseph" <denise.joseph@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-07-11789/conversation **Server:** Internal **Client/Site:** 62 **Account:** Facility management **Issue:** Our contact Orlando Zelaya for Facility management account advised that he applied payment for the following invoice on May 2,2018. 062-37160 5/1/18 98.88 98.88 72.5 0.0 0.0 15.0 11.38 98.88 98.88 0.00 6/7/18 He left the portal thinking it was paid because it stated it went through. He did not get a confirmation email or receipt to confirm the payment but later learned it was not processed. His feedback in his email and during our call earlier was the same. “Our invoice 062-37160 was paid throughout your new web system on May 2, 2018. However, it shows outstanding with late charge on the most recent invoice 062-38157. We are considering a cancellation of our subscription because your new web system adds an extra layer of clerical work and it is unreliable.” Checking his statement confirmed payment was not taken at all but he had no notification from the portal saying the payment declined or any type of explanation. I spoke with the client a few minutes ago an apologized for his experience. Thanked him for his feedback and was able to process the payment on the system. I assume the client will get a receipt once payment is process via the portal. Orlanda stated he did not get one. Is there any way to investigate this client claim and perhaps see if the above notifications can be made available so they receive an email automatically advising of payment confirmation or payment declining. Please let me know so that I can follow up with the client. HERE IS THE FULL EMAIL: [original_message__6_.html](/uploads/1ae03f496050f0a99ff110dd9eff83ee/original_message__6_.html)
1.0
Site062 Facility management payment error - In GitLab by @kdjstudios on Jun 7, 2018, 11:20 **Submitted by:** "Denise Joseph" <denise.joseph@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-07-11789/conversation **Server:** Internal **Client/Site:** 62 **Account:** Facility management **Issue:** Our contact Orlando Zelaya for Facility management account advised that he applied payment for the following invoice on May 2,2018. 062-37160 5/1/18 98.88 98.88 72.5 0.0 0.0 15.0 11.38 98.88 98.88 0.00 6/7/18 He left the portal thinking it was paid because it stated it went through. He did not get a confirmation email or receipt to confirm the payment but later learned it was not processed. His feedback in his email and during our call earlier was the same. “Our invoice 062-37160 was paid throughout your new web system on May 2, 2018. However, it shows outstanding with late charge on the most recent invoice 062-38157. We are considering a cancellation of our subscription because your new web system adds an extra layer of clerical work and it is unreliable.” Checking his statement confirmed payment was not taken at all but he had no notification from the portal saying the payment declined or any type of explanation. I spoke with the client a few minutes ago an apologized for his experience. Thanked him for his feedback and was able to process the payment on the system. I assume the client will get a receipt once payment is process via the portal. Orlanda stated he did not get one. Is there any way to investigate this client claim and perhaps see if the above notifications can be made available so they receive an email automatically advising of payment confirmation or payment declining. Please let me know so that I can follow up with the client. HERE IS THE FULL EMAIL: [original_message__6_.html](/uploads/1ae03f496050f0a99ff110dd9eff83ee/original_message__6_.html)
process
facility management payment error in gitlab by kdjstudios on jun submitted by denise joseph helpdesk server internal client site account facility management issue our contact orlando zelaya for facility management account advised that he applied payment for the following invoice on may he left the portal thinking it was paid because it stated it went through he did not get a confirmation email or receipt to confirm the payment but later learned it was not processed his feedback in his email and during our call earlier was the same “our invoice was paid throughout your new web system on may however it shows outstanding with late charge on the most recent invoice we are considering a cancellation of our subscription because your new web system adds an extra layer of clerical work and it is unreliable ” checking his statement confirmed payment was not taken at all but he had no notification from the portal saying the payment declined or any type of explanation i spoke with the client a few minutes ago an apologized for his experience thanked him for his feedback and was able to process the payment on the system i assume the client will get a receipt once payment is process via the portal orlanda stated he did not get one is there any way to investigate this client claim and perhaps see if the above notifications can be made available so they receive an email automatically advising of payment confirmation or payment declining please let me know so that i can follow up with the client here is the full email uploads original message html
1
22,554
31,765,377,981
IssuesEvent
2023-09-12 08:27:33
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Check components that aren't yet wired up
bug good first issue processor/redaction processor/schema extension/jaegerremotesampling receiver/azureblob receiver/snowflake
As seen in #5725, it is possible that components that have been completed already aren't wired up in the list of components. Currently, this seems to be the list of exporters and receivers that haven't been wired up yet as of 29-Nov-2022: - [x] logicmonitorexporter (under development, should not be wired up) - [x] #16567 - [x] purefareceiver (under development, should not be wired up) - [x] snowflakereceiver - [x] jaegerremotesampling - [x] logstransformprocessor (correctly omitted) - [x] redactionprocessor - [x] schemaprocessor Original description: ```console $ ls -1 exporter/ > /tmp/exporters-available.txt $ grep "github.com/open-telemetry/opentelemetry-collector-contrib" internal/components/components.go | sed 's/"//g' | grep exporter | awk -F/ '{print $5}' > /tmp/exporters-wired.txt $ diff /tmp/exporters-* 2d1 < awscloudwatchlogsexporter 12d10 < elasticsearchexporter 16d13 < googlecloudpubsubexporter 26d22 < observiqexporter 33d28 < skywalkingexporter $ grep "github.com/open-telemetry/opentelemetry-collector-contrib" internal/components/components.go | sed 's/"//g' | grep receiver | awk -F/ '{print $5}' > /tmp/receivers-wired.txt $ ls -1 receiver/ > /tmp/receivers-available.txt $ diff /tmp/receivers-* 1d0 < awscontainerinsightreceiver 5d3 < cloudfoundryreceiver 11,12d8 < googlecloudpubsubreceiver < googlecloudspannerreceiver 14d9 < httpdreceiver 23,26d17 < memcachedreceiver < mongodbatlasreceiver < mysqlreceiver < nginxreceiver 34d24 < scraperhelper ``` It is possible that some of those components aren't ready yet, but it's possible that some of them just missed this last step. This ticket is to verify which ones should be added, and to create a PR addressing it.
2.0
Check components that aren't yet wired up - As seen in #5725, it is possible that components that have been completed already aren't wired up in the list of components. Currently, this seems to be the list of exporters and receivers that haven't been wired up yet as of 29-Nov-2022: - [x] logicmonitorexporter (under development, should not be wired up) - [x] #16567 - [x] purefareceiver (under development, should not be wired up) - [x] snowflakereceiver - [x] jaegerremotesampling - [x] logstransformprocessor (correctly omitted) - [x] redactionprocessor - [x] schemaprocessor Original description: ```console $ ls -1 exporter/ > /tmp/exporters-available.txt $ grep "github.com/open-telemetry/opentelemetry-collector-contrib" internal/components/components.go | sed 's/"//g' | grep exporter | awk -F/ '{print $5}' > /tmp/exporters-wired.txt $ diff /tmp/exporters-* 2d1 < awscloudwatchlogsexporter 12d10 < elasticsearchexporter 16d13 < googlecloudpubsubexporter 26d22 < observiqexporter 33d28 < skywalkingexporter $ grep "github.com/open-telemetry/opentelemetry-collector-contrib" internal/components/components.go | sed 's/"//g' | grep receiver | awk -F/ '{print $5}' > /tmp/receivers-wired.txt $ ls -1 receiver/ > /tmp/receivers-available.txt $ diff /tmp/receivers-* 1d0 < awscontainerinsightreceiver 5d3 < cloudfoundryreceiver 11,12d8 < googlecloudpubsubreceiver < googlecloudspannerreceiver 14d9 < httpdreceiver 23,26d17 < memcachedreceiver < mongodbatlasreceiver < mysqlreceiver < nginxreceiver 34d24 < scraperhelper ``` It is possible that some of those components aren't ready yet, but it's possible that some of them just missed this last step. This ticket is to verify which ones should be added, and to create a PR addressing it.
process
check components that aren t yet wired up as seen in it is possible that components that have been completed already aren t wired up in the list of components currently this seems to be the list of exporters and receivers that haven t been wired up yet as of nov logicmonitorexporter under development should not be wired up purefareceiver under development should not be wired up snowflakereceiver jaegerremotesampling logstransformprocessor correctly omitted redactionprocessor schemaprocessor original description console ls exporter tmp exporters available txt grep github com open telemetry opentelemetry collector contrib internal components components go sed s g grep exporter awk f print tmp exporters wired txt diff tmp exporters awscloudwatchlogsexporter elasticsearchexporter googlecloudpubsubexporter observiqexporter skywalkingexporter grep github com open telemetry opentelemetry collector contrib internal components components go sed s g grep receiver awk f print tmp receivers wired txt ls receiver tmp receivers available txt diff tmp receivers awscontainerinsightreceiver cloudfoundryreceiver googlecloudpubsubreceiver googlecloudspannerreceiver httpdreceiver memcachedreceiver mongodbatlasreceiver mysqlreceiver nginxreceiver scraperhelper it is possible that some of those components aren t ready yet but it s possible that some of them just missed this last step this ticket is to verify which ones should be added and to create a pr addressing it
1
332,403
24,342,161,836
IssuesEvent
2022-10-01 21:06:23
Blockception/Minecraft-bedrock-json-schemas
https://api.github.com/repos/Blockception/Minecraft-bedrock-json-schemas
closed
Check entity ai behaviour: minecraft:behavior.circle_around_anchor
documentation help wanted Hacktoberfest
Check if the ai behaviour: minecraft:behavior.circle_around_anchor is update to date
1.0
Check entity ai behaviour: minecraft:behavior.circle_around_anchor - Check if the ai behaviour: minecraft:behavior.circle_around_anchor is update to date
non_process
check entity ai behaviour minecraft behavior circle around anchor check if the ai behaviour minecraft behavior circle around anchor is update to date
0
17,571
23,384,503,206
IssuesEvent
2022-08-11 12:40:30
MicrosoftDocs/windows-uwp
https://api.github.com/repos/MicrosoftDocs/windows-uwp
closed
URI for a settings page is missing
uwp/prod processes-and-threading/tech Pri1
In the list of settings pages and URIs there is no listing for the page "Bluetooth & devices". Both "ms-settings:bluetooth" and "ms-settings:connecteddevices" will open "Bluetooth & devices > Devices" I dont know if there is no URI for this page or it is missing from the list. But a URI for this page would be very usefull now that this page have direct buttons to connect to a specific bluetooth device. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 987ec16c-9456-93a4-177a-dbd563be7eb7 * Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460 * Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app) * Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @alvinashcraft * Microsoft Alias: **aashcraft**
1.0
URI for a settings page is missing - In the list of settings pages and URIs there is no listing for the page "Bluetooth & devices". Both "ms-settings:bluetooth" and "ms-settings:connecteddevices" will open "Bluetooth & devices > Devices" I dont know if there is no URI for this page or it is missing from the list. But a URI for this page would be very usefull now that this page have direct buttons to connect to a specific bluetooth device. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 987ec16c-9456-93a4-177a-dbd563be7eb7 * Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460 * Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app) * Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @alvinashcraft * Microsoft Alias: **aashcraft**
process
uri for a settings page is missing in the list of settings pages and uris there is no listing for the page bluetooth devices both ms settings bluetooth and ms settings connecteddevices will open bluetooth devices devices i dont know if there is no uri for this page or it is missing from the list but a uri for this page would be very usefull now that this page have direct buttons to connect to a specific bluetooth device document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft
1
32,750
7,588,647,687
IssuesEvent
2018-04-26 02:52:13
shawkinsl/mtga-tracker
https://api.github.com/repos/shawkinsl/mtga-tracker
opened
travis-ci tests should use a local database
code-cleanup dogfood enhancement good first issue help wanted task
Main benefit is that we could run tests concurrently (if Travis allows)
1.0
travis-ci tests should use a local database - Main benefit is that we could run tests concurrently (if Travis allows)
non_process
travis ci tests should use a local database main benefit is that we could run tests concurrently if travis allows
0
12,025
14,738,536,031
IssuesEvent
2021-01-07 05:02:40
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Keener - simple question - email customer/account
anc-external anc-ops anc-process anp-0.5 ant-bug ant-support
In GitLab by @kdjstudios on Jun 8, 2018, 12:57 **Submitted by:** Gaylan Garrett <gaylan@keenercom.net> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-08-44383/conversation **Server:** External (Both) **Client/Site:** Keener (All) **Account:** NA **Issue:** I was curious if you have one email in customer and a different email in account, which email is used to send the invoices to. My suspicion is that it is the one in customer. The reason I say this Is because I just had a client request we send invoices to a different email but the email they requested is the email I already had set up in account but that person did not get it. The person whose email was in customer received the invoice. This is confusing because the only place you can view the invoices is in account so you would think it would use the email that is in account not customer. Can you please confirm does it use the email from customer or the email from account to email the invoice to.
1.0
Keener - simple question - email customer/account - In GitLab by @kdjstudios on Jun 8, 2018, 12:57 **Submitted by:** Gaylan Garrett <gaylan@keenercom.net> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-08-44383/conversation **Server:** External (Both) **Client/Site:** Keener (All) **Account:** NA **Issue:** I was curious if you have one email in customer and a different email in account, which email is used to send the invoices to. My suspicion is that it is the one in customer. The reason I say this Is because I just had a client request we send invoices to a different email but the email they requested is the email I already had set up in account but that person did not get it. The person whose email was in customer received the invoice. This is confusing because the only place you can view the invoices is in account so you would think it would use the email that is in account not customer. Can you please confirm does it use the email from customer or the email from account to email the invoice to.
process
keener simple question email customer account in gitlab by kdjstudios on jun submitted by gaylan garrett helpdesk server external both client site keener all account na issue i was curious if you have one email in customer and a different email in account which email is used to send the invoices to my suspicion is that it is the one in customer the reason i say this is because i just had a client request we send invoices to a different email but the email they requested is the email i already had set up in account but that person did not get it the person whose email was in customer received the invoice this is confusing because the only place you can view the invoices is in account so you would think it would use the email that is in account not customer can you please confirm does it use the email from customer or the email from account to email the invoice to
1
12,721
15,093,600,186
IssuesEvent
2021-02-07 01:28:57
Maximus5/ConEmu
https://api.github.com/repos/Maximus5/ConEmu
closed
Executing batch scripts from FAR fails if command line contains quotes
processes
### Versions ConEmu build: 210128 x64 OS version: Windows 10 20H2 x64 Used shell version: FAR x64 3.0.0.5737 ### Problem description Starting with ConEmu 210128, executing batch files will instantly fail if the command line contains parameters with quotes. Using ConEmu before 210128 does not have this bug (210112 works fine with FAR 3.0.0.5737) Using ConEmu without FAR does not have this bug Using FAR without ConEmu does not have this bug Executing a `*.EXE` will not have this bug, it only happens with batch files ### Steps to reproduce 1. Create a batch file `BATCH.CMD` (contents don't matter) 2. Execute `batch abc` from FAR shell 3. Execute `batch "abc"` from FAR shell ### Actual results 2. No output, batch file will be called with parameter `abc` 3. Instant fail with output `'"C:\TEST\batch.cmd" "abc"' is not recognized as an internal or external command, operable program or batch file.` The batch file will not get executed at all. ### Expected results Batch file should be executed in both situations. (Quotes are needed if a parameter contains spaces)
1.0
Executing batch scripts from FAR fails if command line contains quotes - ### Versions ConEmu build: 210128 x64 OS version: Windows 10 20H2 x64 Used shell version: FAR x64 3.0.0.5737 ### Problem description Starting with ConEmu 210128, executing batch files will instantly fail if the command line contains parameters with quotes. Using ConEmu before 210128 does not have this bug (210112 works fine with FAR 3.0.0.5737) Using ConEmu without FAR does not have this bug Using FAR without ConEmu does not have this bug Executing a `*.EXE` will not have this bug, it only happens with batch files ### Steps to reproduce 1. Create a batch file `BATCH.CMD` (contents don't matter) 2. Execute `batch abc` from FAR shell 3. Execute `batch "abc"` from FAR shell ### Actual results 2. No output, batch file will be called with parameter `abc` 3. Instant fail with output `'"C:\TEST\batch.cmd" "abc"' is not recognized as an internal or external command, operable program or batch file.` The batch file will not get executed at all. ### Expected results Batch file should be executed in both situations. (Quotes are needed if a parameter contains spaces)
process
executing batch scripts from far fails if command line contains quotes versions conemu build os version windows used shell version far problem description starting with conemu executing batch files will instantly fail if the command line contains parameters with quotes using conemu before does not have this bug works fine with far using conemu without far does not have this bug using far without conemu does not have this bug executing a exe will not have this bug it only happens with batch files steps to reproduce create a batch file batch cmd contents don t matter execute batch abc from far shell execute batch abc from far shell actual results no output batch file will be called with parameter abc instant fail with output c test batch cmd abc is not recognized as an internal or external command operable program or batch file the batch file will not get executed at all expected results batch file should be executed in both situations quotes are needed if a parameter contains spaces
1
61,560
14,630,196,351
IssuesEvent
2020-12-23 17:14:16
silinternational/hipchat-php-client
https://api.github.com/repos/silinternational/hipchat-php-client
closed
CVE-2019-11358 (Medium) detected in jquery-1.11.3.min.js
security vulnerability
## CVE-2019-11358 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to vulnerable library: /hipchat-php-client/vendor/phpunit/php-code-coverage/src/CodeCoverage/Report/HTML/Renderer/Template/js/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/silinternational/hipchat-php-client/commits/4e42c2001a53baaa6826725742091ec10dc728ec">4e42c2001a53baaa6826725742091ec10dc728ec</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype. <p>Publish Date: 2019-04-20 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358>CVE-2019-11358</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p> <p>Release Date: 2019-04-20</p> <p>Fix Resolution: 3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-11358 (Medium) detected in jquery-1.11.3.min.js - ## CVE-2019-11358 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to vulnerable library: /hipchat-php-client/vendor/phpunit/php-code-coverage/src/CodeCoverage/Report/HTML/Renderer/Template/js/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/silinternational/hipchat-php-client/commits/4e42c2001a53baaa6826725742091ec10dc728ec">4e42c2001a53baaa6826725742091ec10dc728ec</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype. <p>Publish Date: 2019-04-20 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358>CVE-2019-11358</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p> <p>Release Date: 2019-04-20</p> <p>Fix Resolution: 3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library hipchat php client vendor phpunit php code coverage src codecoverage report html renderer template js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
9,632
12,597,771,699
IssuesEvent
2020-06-11 00:53:07
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Training using “mp.spawn”, can not reproduce the training results
module: distributed module: multiprocessing triaged
I used a distributed training method. And use **mp.spawn(main_worker, nprocs=ngpus_per_node)** to open up multiple processes.But the random number seeds for each process are different. This has led me to fail to reproduce previous results. My code is as follows: ``` import torch import torch.nn as nn import torch.optim as optim import torchvision.transforms as transforms import torchvision.datasets as datasets import numpy as np import torch.multiprocessing as mp import torch.distributed as dist def main(): SEED = 1375 # random seed for reproduce results torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = True ngpus_per_node = torch.cuda.device_count() world_size = 1 * ngpus_per_node mp.spawn(main_worker, nprocs=ngpus_per_node) def main_worker(gpu): ... ``` And the seeds I set in the main function don't seem to work. What should I do to reproduce the previous results. Thank you! cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
1.0
Training using “mp.spawn”, can not reproduce the training results - I used a distributed training method. And use **mp.spawn(main_worker, nprocs=ngpus_per_node)** to open up multiple processes.But the random number seeds for each process are different. This has led me to fail to reproduce previous results. My code is as follows: ``` import torch import torch.nn as nn import torch.optim as optim import torchvision.transforms as transforms import torchvision.datasets as datasets import numpy as np import torch.multiprocessing as mp import torch.distributed as dist def main(): SEED = 1375 # random seed for reproduce results torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = True ngpus_per_node = torch.cuda.device_count() world_size = 1 * ngpus_per_node mp.spawn(main_worker, nprocs=ngpus_per_node) def main_worker(gpu): ... ``` And the seeds I set in the main function don't seem to work. What should I do to reproduce the previous results. Thank you! cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
process
training using “mp spawn” can not reproduce the training results i used a distributed training method and use mp spawn main worker nprocs ngpus per node to open up multiple processes but the random number seeds for each process are different this has led me to fail to reproduce previous results my code is as follows import torch import torch nn as nn import torch optim as optim import torchvision transforms as transforms import torchvision datasets as datasets import numpy as np import torch multiprocessing as mp import torch distributed as dist def main seed random seed for reproduce results torch manual seed seed torch backends cudnn deterministic true torch backends cudnn benchmark true ngpus per node torch cuda device count world size ngpus per node mp spawn main worker nprocs ngpus per node def main worker gpu and the seeds i set in the main function don t seem to work what should i do to reproduce the previous results thank you! cc pietern mrshenli zhaojuanmao satgera rohan varma gqchen aazzolini osalpekar
1
220,090
16,887,220,883
IssuesEvent
2021-06-23 02:59:11
sideway/joi
https://api.github.com/repos/sideway/joi
opened
Document message templating
documentation
<!-- ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ You must complete this entire issue template to receive support. You MUST NOT remove, change, or replace the template with your own format. A missing or incomplete report will cause your issue to be closed without comment. Please respect the time and experience that went into this template. It is here for a reason. Thank you! ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ --> #### Context * *node version*: N/A * *module version*: N/A #### What are you trying to achieve or the steps to reproduce ? <!-- Before opening a documentation issue, please consider opening a Pull Request instead for trivial changes such as typos, spelling, incorrect links, anchors, or other corrections that are easier to just fix than report using this template. Please do not spend valuable time proposing extensive changes to the documentation before first asking about it. We value your time and do not want to waste it. Just open an issue first using this template and ask if your proposed changes would be helpful. Make sure to wrap all code examples in backticks so that they display correctly. Before submitting an issue, make sure to click on the Preview tab above to verify everything is formatted correctly. --> ```js const some = 'properly formatted code example'; ``` Message formatting isn't at all documented, including: - the templating syntax (I've seen things like `{{#label}}` and `{:[.]}`). 
- which values are available to be interpolated into the message for each (digging through the source I found `key`, `label`, and `value`, but had to use `console.log()` to learn that `label` is a quoted, dot-joined key path) Copied from a StackOverflow answer I posted: https://stackoverflow.com/a/68092831/101869 --- ### Using templates I had to dig through the [source][] to find an example of how to do context-dependent templating / formatting of messages since it doesn't seem to be documented: [source]: https://github.com/sideway/joi/blob/master/lib/types/string.js#L688 ```js messages: { 'string.alphanum': '{{#label}} must only contain alpha-numeric characters', 'string.base': '{{#label}} must be a string', 'string.base64': '{{#label}} must be a valid base64 string', 'string.creditCard': '{{#label}} must be a credit card', 'string.dataUri': '{{#label}} must be a valid dataUri string', 'string.domain': '{{#label}} must contain a valid domain name', 'string.email': '{{#label}} must be a valid email', 'string.empty': '{{#label}} is not allowed to be empty', 'string.guid': '{{#label}} must be a valid GUID', 'string.hex': '{{#label}} must only contain hexadecimal characters', 'string.hexAlign': '{{#label}} hex decoded representation must be byte aligned', 'string.hostname': '{{#label}} must be a valid hostname', 'string.ip': '{{#label}} must be a valid ip address with a {{#cidr}} CIDR', 'string.ipVersion': '{{#label}} must be a valid ip address of one of the following versions {{#version}} with a {{#cidr}} CIDR', 'string.isoDate': '{{#label}} must be in iso format', 'string.isoDuration': '{{#label}} must be a valid ISO 8601 duration', 'string.length': '{{#label}} length must be {{#limit}} characters long', 'string.lowercase': '{{#label}} must only contain lowercase characters', 'string.max': '{{#label}} length must be less than or equal to {{#limit}} characters long', 'string.min': '{{#label}} length must be at least {{#limit}} characters long', 'string.normalize': 
'{{#label}} must be unicode normalized in the {{#form}} form', 'string.token': '{{#label}} must only contain alpha-numeric and underscore characters', 'string.pattern.base': '{{#label}} with value {:[.]} fails to match the required pattern: {{#regex}}', 'string.pattern.name': '{{#label}} with value {:[.]} fails to match the {{#name}} pattern', 'string.pattern.invert.base': '{{#label}} with value {:[.]} matches the inverted pattern: {{#regex}}', 'string.pattern.invert.name': '{{#label}} with value {:[.]} matches the inverted {{#name}} pattern', 'string.trim': '{{#label}} must not have leading or trailing whitespace', 'string.uri': '{{#label}} must be a valid uri', 'string.uriCustomScheme': '{{#label}} must be a valid uri with a scheme matching the {{#scheme}} pattern', 'string.uriRelativeOnly': '{{#label}} must be a valid relative uri', 'string.uppercase': '{{#label}} must only contain uppercase characters' } ``` An example of using a templated message: ``` js const Joi = require("joi"); const schema = Joi.object({ nested: Joi.object({ name: Joi.string().required().messages({ "any.required": "{{#label}} is required!!", "string.empty": "{{#label}} can't be empty!!", }), }), }); const result = schema.validate({ nested: { // comment/uncomment to see the other message // name: "", }, }); console.log(result.error.details); ``` When using the template syntax, the context values that seem to be passed are something like the following, though specific rules / validators may pass more context: ``` js { key: "name", // this key, without ancestry label: `"nested.name"`, // full path with dots as separators, in quotes value: "", // the value that was validated } ```
1.0
Document message templating - <!-- ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ You must complete this entire issue template to receive support. You MUST NOT remove, change, or replace the template with your own format. A missing or incomplete report will cause your issue to be closed without comment. Please respect the time and experience that went into this template. It is here for a reason. Thank you! ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ --> #### Context * *node version*: N/A * *module version*: N/A #### What are you trying to achieve or the steps to reproduce ? <!-- Before opening a documentation issue, please consider opening a Pull Request instead for trivial changes such as typos, spelling, incorrect links, anchors, or other corrections that are easier to just fix than report using this template. Please do not spend valuable time proposing extensive changes to the documentation before first asking about it. We value your time and do not want to waste it. Just open an issue first using this template and ask if your proposed changes would be helpful. Make sure to wrap all code examples in backticks so that they display correctly. Before submitting an issue, make sure to click on the Preview tab above to verify everything is formatted correctly. --> ```js const some = 'properly formatted code example'; ``` Message formatting isn't at all documented, including: - the templating syntax (I've seen things like `{{#label}}` and `{:[.]}`). 
- which values are available to be interpolated into the message for each (digging through the source I found `key`, `label`, and `value`, but had to use `console.log()` to learn that `label` is a quoted, dot-joined key path) Copied from a StackOverflow answer I posted: https://stackoverflow.com/a/68092831/101869 --- ### Using templates I had to dig through the [source][] to find an example of how to do context-dependent templating / formatting of messages since it doesn't seem to be documented: [source]: https://github.com/sideway/joi/blob/master/lib/types/string.js#L688 ```js messages: { 'string.alphanum': '{{#label}} must only contain alpha-numeric characters', 'string.base': '{{#label}} must be a string', 'string.base64': '{{#label}} must be a valid base64 string', 'string.creditCard': '{{#label}} must be a credit card', 'string.dataUri': '{{#label}} must be a valid dataUri string', 'string.domain': '{{#label}} must contain a valid domain name', 'string.email': '{{#label}} must be a valid email', 'string.empty': '{{#label}} is not allowed to be empty', 'string.guid': '{{#label}} must be a valid GUID', 'string.hex': '{{#label}} must only contain hexadecimal characters', 'string.hexAlign': '{{#label}} hex decoded representation must be byte aligned', 'string.hostname': '{{#label}} must be a valid hostname', 'string.ip': '{{#label}} must be a valid ip address with a {{#cidr}} CIDR', 'string.ipVersion': '{{#label}} must be a valid ip address of one of the following versions {{#version}} with a {{#cidr}} CIDR', 'string.isoDate': '{{#label}} must be in iso format', 'string.isoDuration': '{{#label}} must be a valid ISO 8601 duration', 'string.length': '{{#label}} length must be {{#limit}} characters long', 'string.lowercase': '{{#label}} must only contain lowercase characters', 'string.max': '{{#label}} length must be less than or equal to {{#limit}} characters long', 'string.min': '{{#label}} length must be at least {{#limit}} characters long', 'string.normalize': 
'{{#label}} must be unicode normalized in the {{#form}} form', 'string.token': '{{#label}} must only contain alpha-numeric and underscore characters', 'string.pattern.base': '{{#label}} with value {:[.]} fails to match the required pattern: {{#regex}}', 'string.pattern.name': '{{#label}} with value {:[.]} fails to match the {{#name}} pattern', 'string.pattern.invert.base': '{{#label}} with value {:[.]} matches the inverted pattern: {{#regex}}', 'string.pattern.invert.name': '{{#label}} with value {:[.]} matches the inverted {{#name}} pattern', 'string.trim': '{{#label}} must not have leading or trailing whitespace', 'string.uri': '{{#label}} must be a valid uri', 'string.uriCustomScheme': '{{#label}} must be a valid uri with a scheme matching the {{#scheme}} pattern', 'string.uriRelativeOnly': '{{#label}} must be a valid relative uri', 'string.uppercase': '{{#label}} must only contain uppercase characters' } ``` An example of using a templated message: ``` js const Joi = require("joi"); const schema = Joi.object({ nested: Joi.object({ name: Joi.string().required().messages({ "any.required": "{{#label}} is required!!", "string.empty": "{{#label}} can't be empty!!", }), }), }); const result = schema.validate({ nested: { // comment/uncomment to see the other message // name: "", }, }); console.log(result.error.details); ``` When using the template syntax, the context values that seem to be passed are something like the following, though specific rules / validators may pass more context: ``` js { key: "name", // this key, without ancestry label: `"nested.name"`, // full path with dots as separators, in quotes value: "", // the value that was validated } ```
non_process
document message templating ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ you must complete this entire issue template to receive support you must not remove change or replace the template with your own format a missing or incomplete report will cause your issue to be closed without comment please respect the time and experience that went into this template it is here for a reason thank you ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ context node version n a module version n a what are you trying to achieve or the steps to reproduce before opening a documentation issue please consider opening a pull request instead for trivial changes such as typos spelling incorrect links anchors or other corrections that are easier to just fix than report using this template please do not spend valuable time proposing extensive changes to the documentation before first asking about it we value your time and do not want to waste it just open an issue first using this template and ask if your proposed changes would be helpful make sure to wrap all code examples in backticks so that they display correctly before submitting an issue make sure to click on the preview tab above to verify everything is formatted correctly js const some properly formatted code example message formatting isn t at all documented including the templating syntax i ve seen things like label and which values are available to be interpolated into the message for each digging through the source i found key label and value but had to use console log to learn that label is a quoted dot joined key path copied from a stackoverflow answer i posted using templates i had to dig through the to find an example of how to do context dependent templating formatting of messages since it doesn t seem to be documented js messages string alphanum label must only contain alpha numeric characters string base label must be a string string label must be a valid string string creditcard label must be a credit card string datauri label must be a valid datauri string string domain label 
must contain a valid domain name string email label must be a valid email string empty label is not allowed to be empty string guid label must be a valid guid string hex label must only contain hexadecimal characters string hexalign label hex decoded representation must be byte aligned string hostname label must be a valid hostname string ip label must be a valid ip address with a cidr cidr string ipversion label must be a valid ip address of one of the following versions version with a cidr cidr string isodate label must be in iso format string isoduration label must be a valid iso duration string length label length must be limit characters long string lowercase label must only contain lowercase characters string max label length must be less than or equal to limit characters long string min label length must be at least limit characters long string normalize label must be unicode normalized in the form form string token label must only contain alpha numeric and underscore characters string pattern base label with value fails to match the required pattern regex string pattern name label with value fails to match the name pattern string pattern invert base label with value matches the inverted pattern regex string pattern invert name label with value matches the inverted name pattern string trim label must not have leading or trailing whitespace string uri label must be a valid uri string uricustomscheme label must be a valid uri with a scheme matching the scheme pattern string urirelativeonly label must be a valid relative uri string uppercase label must only contain uppercase characters an example of using a templated message js const joi require joi const schema joi object nested joi object name joi string required messages any required label is required string empty label can t be empty const result schema validate nested comment uncomment to see the other message name console log result error details when using the template syntax the context values that seem 
to be passed are something like the following though specific rules validators may pass more context js key name this key without ancestry label nested name full path with dots as separators in quotes value the value that was validated
0
20,678
27,350,280,632
IssuesEvent
2023-02-27 09:05:33
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
opened
Extron IPCP Pro...
NOT YET PROCESSED
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: IPCP PRO What you would like to be able to make it do from Companion: I want to send strings over ethernet which i can "read" (condition) with a monitor in Global Configurator plus/pro. As action I want to contral all others. Direct links or attachments to the ethernet control protocol or API: https://media.extron.com/public/download/files/userman/68-2961-01_F_Extron_Ctrl_NetworkPortsnLicenses.pdf https://media.extron.com/public/download/files/userman/68-2438-01_L_IPCP_Pro_UG.pdf
1.0
Extron IPCP Pro... - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: IPCP PRO What you would like to be able to make it do from Companion: I want to send strings over ethernet which i can "read" (condition) with a monitor in Global Configurator plus/pro. As action I want to contral all others. Direct links or attachments to the ethernet control protocol or API: https://media.extron.com/public/download/files/userman/68-2961-01_F_Extron_Ctrl_NetworkPortsnLicenses.pdf https://media.extron.com/public/download/files/userman/68-2438-01_L_IPCP_Pro_UG.pdf
process
extron ipcp pro i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control ipcp pro what you would like to be able to make it do from companion i want to send strings over ethernet which i can read condition with a monitor in global configurator plus pro as action i want to contral all others direct links or attachments to the ethernet control protocol or api
1
737,188
25,505,037,882
IssuesEvent
2022-11-28 08:46:55
bash-lsp/bash-language-server
https://api.github.com/repos/bash-lsp/bash-language-server
closed
VSCode: Repo with not really 21K shell scripts locks up language server in "Analyzing" loop
bug priority ⭐️
I have a repo with a few submodules; nothing out of ordinary: gRPC, Kaldi, and a couple of lesser libs. gRPC has a dozen submodules on its own. Kaldi is written half in C++, half in Bash as a glue. gRPC and friends also have a non-negligible count of shell scripts. But some of these files are accounted multiple times, close to a hundred, via systematic symlinking. _There aren't really 21K files_, in fact "only" 4K, but some of them are massively reachable through symlinks with a high multiplicity. ``` $ find -name \*.sh | wc -l 4104 ``` The server seems to constantly crash and restart. VSCode is logging non-stop, and I never seen anything like code completion working: ``` [Info - 11:48:45 PM] BashLanguageServer initializing... Analyzing files matching glob "**/*@(.sh|.inc|.bash|.command)" inside /home/kkm/work/ikke Glob resolved with 21045 files after 2.389 seconds Analyzing file:///home/kkm/work/ikke/ext/fmt/test/fuzzing/build.sh Analyzing file:///home/kkm/work/ikke/ext/grpc/bazel/update_mirror.sh ... Only 16646 total "Analyzing" or "Skipping" records, 4399 short of 21045 ... Analyzing file:///home/kkm/work/ikke/ext/kaldi/egs/swahili/s5/utils/mkgraph_lookahead.sh Analyzing file:///home/kkm/work/ikke/ext/kaldi/egs/swahili/s5/utils/mkgraph.sh [Info - 11:49:33 PM] Connection to server got closed. Server will restart. [Info - 11:49:34 PM] BashLanguageServer initializing... Analyzing files matching glob "**/*@(.sh|.inc|.bash|.command)" inside /home/kkm/work/ikke Glob resolved with 21045 files after 2.474 seconds ``` ...and there we go again. And again. Ideally, _in this project_ I'd prefer to exclude all submodule files from the scan, or, in fact, all files: we have just a dozen standalone scripts here. Other library scripts, including Kaldi, are irrelevant, they are just happened to be dropped by Git submodules. But I'm also member of the Kaldi team. Working on Kaldi itself is a different story. 
I'd probably want to select only specific directories with common library scripts (just three, in fact). All subdirectories of `egs/` are independent experiments ("ee Jeez", "ee jee" singular — likely from "e.g.," but this terminology pre-dates my joining the project :)), practically never calling each other. Most correspond to published research and are contributed by various people, and we safekeep them for reproducibility of published results. Every eg invariably links to the two common script libraries, `utils` and `steps`. For example, [the `swahili/s5` eg](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/swahili/s5) in the last pre-crash log message above reached `utils/mkgraph.sh` through the `utils` symlink. At this moment, [we have exactly 99 egs](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/) and a PR for the 100th one, so that every file in these two libraries is counted 99 times by the language server.¹ . So, my "if I had a magic wand" configuration for standalone Kaldi would be to scan only files of a single subdirectory under egs/ where I have opened a file, and these two utility libraries (either through symlinks, or configured manually if you could teach the server to ignore symlinks altogether; adding 2 directories to the VSCode config is no sweat). In fact, out of the ≈4100 shell scripts, ≈3500 are in this `egs/` directory. I don't know if you consider it too large a number to do anything special about it, or not. As long as it works for me, anything goes. :) ————————— ¹ Yes, it could have been done better, but the toolkit is a de facto standard in our research area and is approaching 20 years, so we are very careful with radical changes. Users are familiar with this layout. Even weirder, [the `steps` and `utils` are real subdirectories of (the first ever(?)) eg, `wsj/s5`](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/wsj/s5), and the next one started the 20 year of the tradition of symlinking them.
1.0
VSCode: Repo with not really 21K shell scripts locks up language server in "Analyzing" loop - I have a repo with a few submodules; nothing out of ordinary: gRPC, Kaldi, and a couple of lesser libs. gRPC has a dozen submodules on its own. Kaldi is written half in C++, half in Bash as a glue. gRPC and friends also have a non-negligible count of shell scripts. But some of these files are accounted multiple times, close to a hundred, via systematic symlinking. _There aren't really 21K files_, in fact "only" 4K, but some of them are massively reachable through symlinks with a high multiplicity. ``` $ find -name \*.sh | wc -l 4104 ``` The server seems to constantly crash and restart. VSCode is logging non-stop, and I never seen anything like code completion working: ``` [Info - 11:48:45 PM] BashLanguageServer initializing... Analyzing files matching glob "**/*@(.sh|.inc|.bash|.command)" inside /home/kkm/work/ikke Glob resolved with 21045 files after 2.389 seconds Analyzing file:///home/kkm/work/ikke/ext/fmt/test/fuzzing/build.sh Analyzing file:///home/kkm/work/ikke/ext/grpc/bazel/update_mirror.sh ... Only 16646 total "Analyzing" or "Skipping" records, 4399 short of 21045 ... Analyzing file:///home/kkm/work/ikke/ext/kaldi/egs/swahili/s5/utils/mkgraph_lookahead.sh Analyzing file:///home/kkm/work/ikke/ext/kaldi/egs/swahili/s5/utils/mkgraph.sh [Info - 11:49:33 PM] Connection to server got closed. Server will restart. [Info - 11:49:34 PM] BashLanguageServer initializing... Analyzing files matching glob "**/*@(.sh|.inc|.bash|.command)" inside /home/kkm/work/ikke Glob resolved with 21045 files after 2.474 seconds ``` ...and there we go again. And again. Ideally, _in this project_ I'd prefer to exclude all submodule files from the scan, or, in fact, all files: we have just a dozen standalone scripts here. Other library scripts, including Kaldi, are irrelevant, they are just happened to be dropped by Git submodules. But I'm also member of the Kaldi team. 
Working on Kaldi itself is a different story. I'd probably want to select only specific directories with common library scripts (just three, in fact). All subdirectories of `egs/` are independent experiments ("ee Jeez", "ee jee" singular — likely from "e.g.," but this terminology pre-dates my joining the project :)), practically never calling each other. Most correspond to published research and are contributed by various people, and we safekeep them for reproducibility of published results. Every eg invariably links to the two common script libraries, `utils` and `steps`. For example, [the `swahili/s5` eg](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/swahili/s5) in the last pre-crash log message above reached `utils/mkgraph.sh` through the `utils` symlink. At this moment, [we have exactly 99 egs](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/) and a PR for the 100th one, so that every file in these two libraries is counted 99 times by the language server.¹ . So, my "if I had a magic wand" configuration for standalone Kaldi would be to scan only files of a single subdirectory under egs/ where I have opened a file, and these two utility libraries (either through symlinks, or configured manually if you could teach the server to ignore symlinks altogether; adding 2 directories to the VSCode config is no sweat). In fact, out of the ≈4100 shell scripts, ≈3500 are in this `egs/` directory. I don't know if you consider it too large a number to do anything special about it, or not. As long as it works for me, anything goes. :) ————————— ¹ Yes, it could have been done better, but the toolkit is a de facto standard in our research area and is approaching 20 years, so we are very careful with radical changes. Users are familiar with this layout. 
Even weirder, [the `steps` and `utils` are real subdirectories of (the first ever(?)) eg, `wsj/s5`](https://github.com/kaldi-asr/kaldi/tree/57f8d991e/egs/wsj/s5), and the next one started the 20 year of the tradition of symlinking them.
non_process
vscode repo with not really shell scripts locks up language server in analyzing loop i have a repo with a few submodules nothing out of ordinary grpc kaldi and a couple of lesser libs grpc has a dozen submodules on its own kaldi is written half in c half in bash as a glue grpc and friends also have a non negligible count of shell scripts but some of these files are accounted multiple times close to a hundred via systematic symlinking there aren t really files in fact only but some of them are massively reachable through symlinks with a high multiplicity find name sh wc l the server seems to constantly crash and restart vscode is logging non stop and i never seen anything like code completion working bashlanguageserver initializing analyzing files matching glob sh inc bash command inside home kkm work ikke glob resolved with files after seconds analyzing file home kkm work ikke ext fmt test fuzzing build sh analyzing file home kkm work ikke ext grpc bazel update mirror sh only total analyzing or skipping records short of analyzing file home kkm work ikke ext kaldi egs swahili utils mkgraph lookahead sh analyzing file home kkm work ikke ext kaldi egs swahili utils mkgraph sh connection to server got closed server will restart bashlanguageserver initializing analyzing files matching glob sh inc bash command inside home kkm work ikke glob resolved with files after seconds and there we go again and again ideally in this project i d prefer to exclude all submodule files from the scan or in fact all files we have just a dozen standalone scripts here other library scripts including kaldi are irrelevant they are just happened to be dropped by git submodules but i m also member of the kaldi team working on kaldi itself is a different story i d probably want to select only specific directories with common library scripts just three in fact all subdirectories of egs are independent experiments ee jeez ee jee singular — likely from e g but this terminology pre dates my joining 
the project practically never calling each other most correspond to published research and are contributed by various people and we safekeep them for reproducibility of published results every eg invariably links to the two common script libraries utils and steps for example in the last pre crash log message above reached utils mkgraph sh through the utils symlink at this moment and a pr for the one so that every file in these two libraries is counted times by the language server ¹ so my if i had a magic wand configuration for standalone kaldi would be to scan only files of a single subdirectory under egs where i have opened a file and these two utility libraries either through symlinks or configured manually if you could teach the server to ignore symlinks altogether adding directories to the vscode config is no sweat in fact out of the ≈ shell scripts ≈ are in this egs directory i don t know if you consider it too large a number to do anything special about it or not as long as it works for me anything goes ————————— ¹ yes it could have been done better but the toolkit is a de facto standard in our research area and is approaching years so we are very careful with radical changes users are familiar with this layout even weirder and the next one started the year of the tradition of symlinking them
0
398,654
27,207,273,534
IssuesEvent
2023-02-20 14:01:44
5G-ERA/middleware
https://api.github.com/repos/5G-ERA/middleware
closed
Middleware - Create contribution guidelines
bug documentation
**Describe the bug** Middleware lacks clear contribution guidelines and a roadmap. An increased focus on the documentation has to be put in.
1.0
Middleware - Create contribution guidelines - **Describe the bug** Middleware lacks clear contribution guidelines and a roadmap. An increased focus on the documentation has to be put in.
non_process
middleware create contribution guidelines describe the bug middleware lacks clear contribution guidelines and a roadmap an increased focus on the documentation has to be put in
0
13,101
15,496,326,445
IssuesEvent
2021-03-11 02:28:13
dluiscosta/weather_api
https://api.github.com/repos/dluiscosta/weather_api
opened
Support on-demand custom execution of tests
development process enhancement
- Allow configuration of automatic testing upon Docker container startup; - Provide an entrypoint to run tests by demand and allow usage of custom pytest parameters;
1.0
Support on-demand custom execution of tests - - Allow configuration of automatic testing upon Docker container startup; - Provide an entrypoint to run tests by demand and allow usage of custom pytest parameters;
process
support on demand custom execution of tests allow configuration of automatic testing upon docker container startup provide an entrypoint to run tests by demand and allow usage of custom pytest parameters
1
414,723
12,110,791,803
IssuesEvent
2020-04-21 11:03:21
AbsaOSS/enceladus
https://api.github.com/repos/AbsaOSS/enceladus
opened
Add tag "Latest" to the latest version in version pickers
Menas UX feature priority: undecided under discussion
## Feature When picking a version add a tag latest next to the update date of the latest version
1.0
Add tag "Latest" to the latest version in version pickers - ## Feature When picking a version add a tag latest next to the update date of the latest version
non_process
add tag latest to the latest version in version pickers feature when picking a version add a tag latest next to the update date of the latest version
0
16,306
20,960,721,738
IssuesEvent
2022-03-27 19:05:27
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Vital Social Issues N' Stuff
suggested title in process
Please add as much of the following info as you can: Title:Vital Social Issues N' Stuff Type (film/tv show):Public access TV Show Film or show in which it appears:Married with Children Is the parent film/show streaming anywhere?Hulu About when in the parent film/show does it appear?S6 Ep 9 10m:08s sActual footage of the film/show can be seen (yes/no)?Yes ![vital social issues n stuff](https://user-images.githubusercontent.com/11180833/129305679-2e1230bb-b740-43f2-9dde-3282f5a14b17.png)
1.0
Vital Social Issues N' Stuff - Please add as much of the following info as you can: Title:Vital Social Issues N' Stuff Type (film/tv show):Public access TV Show Film or show in which it appears:Married with Children Is the parent film/show streaming anywhere?Hulu About when in the parent film/show does it appear?S6 Ep 9 10m:08s sActual footage of the film/show can be seen (yes/no)?Yes ![vital social issues n stuff](https://user-images.githubusercontent.com/11180833/129305679-2e1230bb-b740-43f2-9dde-3282f5a14b17.png)
process
vital social issues n stuff please add as much of the following info as you can title vital social issues n stuff type film tv show public access tv show film or show in which it appears married with children is the parent film show streaming anywhere hulu about when in the parent film show does it appear ep sactual footage of the film show can be seen yes no yes
1
16,561
21,573,346,087
IssuesEvent
2022-05-02 11:00:34
elastic/beats
https://api.github.com/repos/elastic/beats
closed
[filebeat] Allow wildcards for drop_fields
enhancement Filebeat libbeat :Processors Team:Integrations
**Describe the enhancement:** `drop_fields.fields` should support glob or regex patterns. **Describe a specific use case for the enhancement or feature:** I'm using the `add_docker_metadata` processor to extract container metadata, which also includes Docker labels. For some containers, there are quite a lot that don't contain relevant information for logging purposes. For example, the official `elasticsearch` image includes: - "container.labels.license" - "container.labels.org_label-schema_build-date" - "container.labels.org_label-schema_license" - "container.labels.org_label-schema_name" - "container.labels.org_label-schema_schema-version" - "container.labels.org_label-schema_url" - "container.labels.org_label-schema_vcs-url" - "container.labels.org_label-schema_vendor" - "container.labels.org_label-schema_version" Also many labels coming from `docker-compose` are not really needed and can be dropped as well. So you quickly end up with quite a list of labels, all starting with the same prefix. (The exact same labels are also repeated as `docker.container.…`, but that's a different issue). So instead of having to list all those labels under `drop_fields`, it would be nice to just write: - "container.labels.license" - "container.labels.org_label-schema*"
1.0
[filebeat] Allow wildcards for drop_fields - **Describe the enhancement:** `drop_fields.fields` should support glob or regex patterns. **Describe a specific use case for the enhancement or feature:** I'm using the `add_docker_metadata` processor to extract container metadata, which also includes Docker labels. For some containers, there are quite a lot that don't contain relevant information for logging purposes. For example, the official `elasticsearch` image includes: - "container.labels.license" - "container.labels.org_label-schema_build-date" - "container.labels.org_label-schema_license" - "container.labels.org_label-schema_name" - "container.labels.org_label-schema_schema-version" - "container.labels.org_label-schema_url" - "container.labels.org_label-schema_vcs-url" - "container.labels.org_label-schema_vendor" - "container.labels.org_label-schema_version" Also many labels coming from `docker-compose` are not really needed and can be dropped as well. So you quickly end up with quite a list of labels, all starting with the same prefix. (The exact same labels are also repeated as `docker.container.…`, but that's a different issue). So instead of having to list all those labels under `drop_fields`, it would be nice to just write: - "container.labels.license" - "container.labels.org_label-schema*"
process
allow wildcards for drop fields describe the enhancement drop fields fields should support glob or regex patterns describe a specific use case for the enhancement or feature i m using the add docker metadata processor to extract container metadata which also includes docker labels for some containers there are quite a lot that don t contain relevant information for logging purposes for example the official elasticsearch image includes container labels license container labels org label schema build date container labels org label schema license container labels org label schema name container labels org label schema schema version container labels org label schema url container labels org label schema vcs url container labels org label schema vendor container labels org label schema version also many labels coming from docker compose are not really needed and can be dropped as well so you quickly end up with quite a list of labels all starting with the same prefix the exact same labels are also repeated as docker container … but that s a different issue so instead of having to list all those labels under drop fields it would be nice to just write container labels license container labels org label schema
1
582,555
17,364,391,642
IssuesEvent
2021-07-30 04:08:09
neogcamp/platform-93by4
https://api.github.com/repos/neogcamp/platform-93by4
opened
Proposed: Link logo change on Student dashboard.
Priority: Low Type: Enhancement Type: good first issue
The logo on the dashboard for Submit your portfolio states it is an external link while it's not, ![Screenshot from 2021-07-30 09-30-00](https://user-images.githubusercontent.com/55375534/127598686-ab2ba057-50d3-447d-a4c3-5d558e05ec51.png) So I believe we need to change the logo to the logo in the picture below. ![Screenshot from 2021-07-30 09-30-23](https://user-images.githubusercontent.com/55375534/127598724-eae9198a-7fe5-43c5-b653-77e01d4c265b.png) .
1.0
Proposed: Link logo change on Student dashboard. - The logo on the dashboard for Submit your portfolio states it is an external link while it's not, ![Screenshot from 2021-07-30 09-30-00](https://user-images.githubusercontent.com/55375534/127598686-ab2ba057-50d3-447d-a4c3-5d558e05ec51.png) So I believe we need to change the logo to the logo in the picture below. ![Screenshot from 2021-07-30 09-30-23](https://user-images.githubusercontent.com/55375534/127598724-eae9198a-7fe5-43c5-b653-77e01d4c265b.png) .
non_process
proposed link logo change on student dashboard the logo on the dashboard for submit your portfolio states it is an external link while it s not so i believe we need to change the logo to the logo in the picture below
0
7,014
10,165,584,452
IssuesEvent
2019-08-07 14:11:15
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Why are the examples AzureRM command?
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
Didn't Microsoft (you) replace the powershell AzureRM module with the powershell Az module. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 805a6236-70b7-7dd5-ac86-eea6efceff3a * Version Independent ID: e8078e34-bdf0-32b1-fac4-550091f2a06a * Content: [Learning PowerShell Workflow for Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow) * Content Source: [articles/automation/automation-powershell-workflow.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-powershell-workflow.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
1.0
Why are the examples AzureRM command? - Didn't Microsoft (you) replace the powershell AzureRM module with the powershell Az module. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 805a6236-70b7-7dd5-ac86-eea6efceff3a * Version Independent ID: e8078e34-bdf0-32b1-fac4-550091f2a06a * Content: [Learning PowerShell Workflow for Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow) * Content Source: [articles/automation/automation-powershell-workflow.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-powershell-workflow.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
process
why are the examples azurerm command didn t microsoft you replace the powershell azurerm module with the powershell az module document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
1
104,667
11,418,292,383
IssuesEvent
2020-02-03 03:56:21
vuetifyjs/vuetify
https://api.github.com/repos/vuetifyjs/vuetify
reopened
[Bug Report] [v-autocomplete] Wrong menu position on items change if it is already open
C: VSelect T: documentation
### Environment **Vuetify Version:** 2.2.8 **Vue Version:** 2.6.11 **Browsers:** Chrome 79.0.3945.117 **OS:** Mac OS 10.15.2 ### Steps to reproduce While the autocomplete menu is open and items of the autocomplete change (in the example from empty array to a 5 elements array), the menu doesn't change position until I close and re-open it. ### Expected Behavior I expect that after the item list of the autocomplete change, it will adjust the dropdown menu position, also if it already open ### Actual Behavior Position of dropdown position doesn't change until the dropdown menu is not reopened ### Reproduction Link <a href="https://codepen.io/alessandro-mesti-sinerbit/pen/ZEYNyjW?editors=1010" target="_blank">https://codepen.io/alessandro-mesti-sinerbit/pen/ZEYNyjW?editors=1010</a> <!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
1.0
[Bug Report] [v-autocomplete] Wrong menu position on items change if it is already open - ### Environment **Vuetify Version:** 2.2.8 **Vue Version:** 2.6.11 **Browsers:** Chrome 79.0.3945.117 **OS:** Mac OS 10.15.2 ### Steps to reproduce While the autocomplete menu is open and items of the autocomplete change (in the example from empty array to a 5 elements array), the menu doesn't change position until I close and re-open it. ### Expected Behavior I expect that after the item list of the autocomplete change, it will adjust the dropdown menu position, also if it already open ### Actual Behavior Position of dropdown position doesn't change until the dropdown menu is not reopened ### Reproduction Link <a href="https://codepen.io/alessandro-mesti-sinerbit/pen/ZEYNyjW?editors=1010" target="_blank">https://codepen.io/alessandro-mesti-sinerbit/pen/ZEYNyjW?editors=1010</a> <!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
non_process
wrong menu position on items change if it is already open environment vuetify version vue version browsers chrome os mac os steps to reproduce while the autocomplete menu is open and items of the autocomplete change in the example from empty array to a elements array the menu doesn t change position until i close and re open it expected behavior i expect that after the item list of the autocomplete change it will adjust the dropdown menu position also if it already open actual behavior position of dropdown position doesn t change until the dropdown menu is not reopened reproduction link
0
203,940
7,078,216,009
IssuesEvent
2018-01-10 02:19:14
internetarchive/openlibrary
https://api.github.com/repos/internetarchive/openlibrary
opened
availability.js getUsersLoansAndWaitlists should be rolled back
blocker easy performance priority
A feature was started in `availability.js` which fetches a user's active waitlists to be used when checking `getAvailabilityV2` -- `getUsersLoansAndWaitlists`. This method is incomplete and should be rolled back and moved to its own branch (until correctly implemented, it just negatively impacts performance)
1.0
availability.js getUsersLoansAndWaitlists should be rolled back - A feature was started in `availability.js` which fetches a user's active waitlists to be used when checking `getAvailabilityV2` -- `getUsersLoansAndWaitlists`. This method is incomplete and should be rolled back and moved to its own branch (until correctly implemented, it just negatively impacts performance)
non_process
availability js getusersloansandwaitlists should be rolled back a feature was started in availability js which fetches a user s active waitlists to be used when checking getusersloansandwaitlists this method is incomplete and should be rolled back and moved to its own branch until correctly implemented it just negatively impacts performance
0
534,410
15,618,284,947
IssuesEvent
2021-03-20 00:33:22
ainslec/adventuron-issue-tracker
https://api.github.com/repos/ainslec/adventuron-issue-tracker
closed
Be able to scan deeply inside an outer container
priority
Chris M's reminder about what he'd like to be able to do. `This reminds me of a thing I asked ages ago and I don't think it was possible then (I but don't recall there being a clear answer). I was trying to check if any of the children of a container object had a particular trait. I ended up checking individual objects to see if a.they had the trait and b. they were in the container but I wanted to scan the whole set at once.`
1.0
Be able to scan deeply inside an outer container - Chris M's reminder about what he'd like to be able to do. `This reminds me of a thing I asked ages ago and I don't think it was possible then (I but don't recall there being a clear answer). I was trying to check if any of the children of a container object had a particular trait. I ended up checking individual objects to see if a.they had the trait and b. they were in the container but I wanted to scan the whole set at once.`
non_process
be able to scan deeply inside an outer container chris m s reminder about what he d like to be able to do this reminds me of a thing i asked ages ago and i don t think it was possible then i but don t recall there being a clear answer i was trying to check if any of the children of a container object had a particular trait i ended up checking individual objects to see if a they had the trait and b they were in the container but i wanted to scan the whole set at once
0
19,101
25,148,589,804
IssuesEvent
2022-11-10 08:11:56
EBIvariation/eva-opentargets
https://api.github.com/repos/EBIvariation/eva-opentargets
closed
Evidence string generation for 2022.11 release
Processing
**Deadline for submission: 4 November 2022** Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/blob/master/docs/generate-evidence-strings.md) for full description of steps.
1.0
Evidence string generation for 2022.11 release - **Deadline for submission: 4 November 2022** Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/blob/master/docs/generate-evidence-strings.md) for full description of steps.
process
evidence string generation for release deadline for submission november refer to for full description of steps
1
20,117
26,656,032,567
IssuesEvent
2023-01-25 16:55:39
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Change the hashing algorithm for probabilistic sampler
enhancement good first issue processor/probabilisticsampler
In the probabilistic sampler, we seem to use a hashing algorithm of our own, instead of relying on a library to do so, as it's done in other modules. We should try to follow the guideline we have on this: https://github.com/open-telemetry/opentelemetry-collector/blob/main/CONTRIBUTING.md#recommended-libraries--defaults _Originally posted by @MovieStoreGuy in https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/14920#discussion_r1019684154_
1.0
Change the hashing algorithm for probabilistic sampler - In the probabilistic sampler, we seem to use a hashing algorithm of our own, instead of relying on a library to do so, as it's done in other modules. We should try to follow the guideline we have on this: https://github.com/open-telemetry/opentelemetry-collector/blob/main/CONTRIBUTING.md#recommended-libraries--defaults _Originally posted by @MovieStoreGuy in https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/14920#discussion_r1019684154_
process
change the hashing algorithm for probabilistic sampler in the probabilistic sampler we seem to use a hashing algorithm of our own instead of relying on a library to do so as it s done in other modules we should try to follow the guideline we have on this originally posted by moviestoreguy in
1
27,120
11,439,781,743
IssuesEvent
2020-02-05 08:13:10
kyma-project/test-infra
https://api.github.com/repos/kyma-project/test-infra
reopened
Prow Security
area/ci area/security ci/future
Renew Prow secrets. `All Secret used in the Prow production cluster must be changed every six months. Follow Prow secret management to create a new key ring and new Secrets. Then, use Secrets populator to change all Secrets in the Prow cluster.` `NOTE: The next Secrets changewas planned for August 1, 2019.` https://github.com/kyma-project/test-infra/blob/64f57d095145a087be9f5ecebfabf066c0403bd5/docs/prow/obligatory-security-measures.md
True
Prow Security - Renew Prow secrets. `All Secret used in the Prow production cluster must be changed every six months. Follow Prow secret management to create a new key ring and new Secrets. Then, use Secrets populator to change all Secrets in the Prow cluster.` `NOTE: The next Secrets changewas planned for August 1, 2019.` https://github.com/kyma-project/test-infra/blob/64f57d095145a087be9f5ecebfabf066c0403bd5/docs/prow/obligatory-security-measures.md
non_process
prow security renew prow secrets all secret used in the prow production cluster must be changed every six months follow prow secret management to create a new key ring and new secrets then use secrets populator to change all secrets in the prow cluster note the next secrets changewas planned for august
0
21,369
29,202,226,676
IssuesEvent
2023-05-21 00:36:29
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Hibrido / Belo Horizonte, Minas Gerais, Brazil] Product Owner (Hibrido em Belo horizonte) na Coodesh
SALVADOR SCRUM REQUISITOS PROCESSOS GITHUB CI SEGURANÇA UMA LÓGICA DE PROGRAMAÇÃO METODOLOGIAS ÁGEIS PRODUCT OWNER BACKLOG HIBRIDO B2B ALOCADO Stale
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/product-owner-hibrido-em-belo-horizonte-mg-131650389?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Onysis</strong> está em busca de <strong><ins>Product Owner</ins></strong> para compor seu time!</p> <p>Plataforma de gestão de Segurança no Transporte. Cuidando da segurança, o maior ativo da sua transportadora: o MOTORISTA.</p> <p>A pessoa será responsável por participar dos processos de descoberta, ideação e validação dos recursos do produto, analisando e gerando insights e oportunidades para Produto e Negócio, assim como a construção da visão de produto e roadmap, definindo os requisitos de entrega para o time de desenvolvimento.&nbsp;</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Reportar o andamento da implementação e acompanhar a entrega final sob a perspectiva do nosso usuário;</li> <li>Apontar melhorias pós entrega para garantir a melhor experiência do usuário;</li> <li>Escrever as histórias de usuário referente ao produto;</li> <li>Detalhar e documentar requisitos de negócio das funcionalidades;</li> <li>Atuar nas etapas de Discovery e Delivery ao lado do time;</li> <li>Administrar o backlog do produto, gerenciando a lista de tarefas que precisam ser realizadas;</li> <li>Coletar e analisar as métricas de uso do produto;</li> <li>Priorizar as atividades da sprint</li> <li>Benchmark de produto;</li> <li>Reports com métricas do produto.</li> </ul> ## Onisys: <p>Plataforma de gestão de Segurança no Transporte. 
Cuidando da segurança do maior ativo da sua transportadora: o MOTORISTA</p> <p>A plataforma Onisys possui dois planos que conseguem atender a todos os tipos de empresas de transporte, ou empresas que tenham o transporte como parte de sua operação: Onisys Básico e Onisys Safe.</p><a href='https://coodesh.com/empresas/onisys'>Veja mais no site</a> ## Habilidades: - SCRUM - Design de Sprint - Lógica de Programação ## Local: Belo Horizonte, Minas Gerais, Brazil ## Requisitos: - Graduação completa em Administração, Engenharia, Sistemas, Ciência da Computação, Marketing ou cursos relacionados; - Participação em cursos ou eventos da área de produto; - Experiência com site ou apps; - Conhecimento em lógica de programação; - Experiência com metodologias ágeis; - Experiência prévia com criação de produtos digitais; - Experiência como Product Owner em outras empresa. ## Diferenciais: - Experiência com produtos para o setor logistico; - Conhecimento em sistemas de telemetria; - Experiência com produtos B2B. ## Benefícios: - Horários Flexíveis; - Vale alimentação; - Plano de saúde integral; - Plano Odontológico Integral; - Vale transporte se necessário. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Product Owner (Hibrido em Belo horizonte) na Onisys](https://coodesh.com/vagas/product-owner-hibrido-em-belo-horizonte-mg-131650389?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Alocado #### Regime CLT #### Categoria Gestão em TI
1.0
[Hibrido / Belo Horizonte, Minas Gerais, Brazil] Product Owner (Hibrido em Belo horizonte) na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/product-owner-hibrido-em-belo-horizonte-mg-131650389?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Onysis</strong> está em busca de <strong><ins>Product Owner</ins></strong> para compor seu time!</p> <p>Plataforma de gestão de Segurança no Transporte. Cuidando da segurança, o maior ativo da sua transportadora: o MOTORISTA.</p> <p>A pessoa será responsável por participar dos processos de descoberta, ideação e validação dos recursos do produto, analisando e gerando insights e oportunidades para Produto e Negócio, assim como a construção da visão de produto e roadmap, definindo os requisitos de entrega para o time de desenvolvimento.&nbsp;</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Reportar o andamento da implementação e acompanhar a entrega final sob a perspectiva do nosso usuário;</li> <li>Apontar melhorias pós entrega para garantir a melhor experiência do usuário;</li> <li>Escrever as histórias de usuário referente ao produto;</li> <li>Detalhar e documentar requisitos de negócio das funcionalidades;</li> <li>Atuar nas etapas de Discovery e Delivery ao lado do time;</li> <li>Administrar o backlog do produto, gerenciando a lista de tarefas que precisam ser realizadas;</li> <li>Coletar e analisar as métricas de uso do produto;</li> <li>Priorizar as atividades da sprint</li> <li>Benchmark de produto;</li> <li>Reports com métricas do produto.</li> </ul> ## Onisys: <p>Plataforma de gestão de Segurança no Transporte. 
Cuidando da segurança do maior ativo da sua transportadora: o MOTORISTA</p> <p>A plataforma Onisys possui dois planos que conseguem atender a todos os tipos de empresas de transporte, ou empresas que tenham o transporte como parte de sua operação: Onisys Básico e Onisys Safe.</p><a href='https://coodesh.com/empresas/onisys'>Veja mais no site</a> ## Habilidades: - SCRUM - Design de Sprint - Lógica de Programação ## Local: Belo Horizonte, Minas Gerais, Brazil ## Requisitos: - Graduação completa em Administração, Engenharia, Sistemas, Ciência da Computação, Marketing ou cursos relacionados; - Participação em cursos ou eventos da área de produto; - Experiência com site ou apps; - Conhecimento em lógica de programação; - Experiência com metodologias ágeis; - Experiência prévia com criação de produtos digitais; - Experiência como Product Owner em outras empresa. ## Diferenciais: - Experiência com produtos para o setor logistico; - Conhecimento em sistemas de telemetria; - Experiência com produtos B2B. ## Benefícios: - Horários Flexíveis; - Vale alimentação; - Plano de saúde integral; - Plano Odontológico Integral; - Vale transporte se necessário. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Product Owner (Hibrido em Belo horizonte) na Onisys](https://coodesh.com/vagas/product-owner-hibrido-em-belo-horizonte-mg-131650389?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Alocado #### Regime CLT #### Categoria Gestão em TI
process
product owner hibrido em belo horizonte na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a onysis está em busca de product owner para compor seu time plataforma de gestão de segurança no transporte cuidando da segurança o maior ativo da sua transportadora o motorista a pessoa será responsável por participar dos processos de descoberta ideação e validação dos recursos do produto analisando e gerando insights e oportunidades para produto e negócio assim como a construção da visão de produto e roadmap definindo os requisitos de entrega para o time de desenvolvimento nbsp responsabilidades reportar o andamento da implementação e acompanhar a entrega final sob a perspectiva do nosso usuário apontar melhorias pós entrega para garantir a melhor experiência do usuário escrever as histórias de usuário referente ao produto detalhar e documentar requisitos de negócio das funcionalidades atuar nas etapas de discovery e delivery ao lado do time administrar o backlog do produto gerenciando a lista de tarefas que precisam ser realizadas coletar e analisar as métricas de uso do produto priorizar as atividades da sprint benchmark de produto reports com métricas do produto onisys plataforma de gestão de segurança no transporte cuidando da segurança do maior ativo da sua transportadora o motorista a plataforma onisys possui dois planos que conseguem atender a todos os tipos de empresas de transporte ou empresas que tenham o transporte como parte de sua operação onisys básico e onisys safe habilidades scrum design de sprint lógica de programação local belo horizonte minas gerais brazil requisitos graduação completa em administração engenharia sistemas ciência da computação marketing ou cursos relacionados participação em cursos ou eventos da área de produto 
experiência com site ou apps conhecimento em lógica de programação experiência com metodologias ágeis experiência prévia com criação de produtos digitais experiência como product owner em outras empresa diferenciais experiência com produtos para o setor logistico conhecimento em sistemas de telemetria experiência com produtos benefícios horários flexíveis vale alimentação plano de saúde integral plano odontológico integral vale transporte se necessário como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime clt categoria gestão em ti
1
3,946
6,886,371,433
IssuesEvent
2017-11-21 19:14:28
mcellteam/neuropil_tools
https://api.github.com/repos/mcellteam/neuropil_tools
opened
Improve search function with regex in trace list
processor
Possible need to rectify with Blender regex pecularities
1.0
Improve search function with regex in trace list - Possible need to rectify with Blender regex pecularities
process
improve search function with regex in trace list possible need to rectify with blender regex pecularities
1
6,189
9,103,768,287
IssuesEvent
2019-02-20 16:35:16
googleapis/cloud-debug-nodejs
https://api.github.com/repos/googleapis/cloud-debug-nodejs
closed
Move samples to this repo
priority: p2 type: process
Hi there, our docs-samples repo still have some [debugger samples](https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/debugger). For Node.js, all samples live with their client library if they have one. Can you move the samples over please. Thanks! cc @ofrobots
1.0
Move samples to this repo - Hi there, our docs-samples repo still have some [debugger samples](https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/debugger). For Node.js, all samples live with their client library if they have one. Can you move the samples over please. Thanks! cc @ofrobots
process
move samples to this repo hi there our docs samples repo still have some for node js all samples live with their client library if they have one can you move the samples over please thanks cc ofrobots
1
20,667
27,334,852,478
IssuesEvent
2023-02-26 03:51:00
cse442-at-ub/project_s23-team-infinity
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
closed
Create frontend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the frontend.
IO Task Processing Task Sprint 1 UI Task
**Task Test** *Test 1* 1) Research about what is React.js 2) Follow the tutorial from w3schools.com to understand how to set up the environment of React.js 3) Keep reading from the tutorial to understand the basic component of React.js 4) Set up the environment and checked if it works 5) Verify that the application I created shows properly in the website. 6) Create a shared google doc. and write down the documentation about my understanding of React.js. [https://docs.google.com/document/d/1p5TH8LCwoyRS_pSOWSzL6BV8iflkgyHti9fkvFkcutM/edit#](url)
1.0
Create frontend documentation in order to collate and organize instructions for easier on-boarding and general guidance in the frontend. - **Task Test** *Test 1* 1) Research about what is React.js 2) Follow the tutorial from w3schools.com to understand how to set up the environment of React.js 3) Keep reading from the tutorial to understand the basic component of React.js 4) Set up the environment and checked if it works 5) Verify that the application I created shows properly in the website. 6) Create a shared google doc. and write down the documentation about my understanding of React.js. [https://docs.google.com/document/d/1p5TH8LCwoyRS_pSOWSzL6BV8iflkgyHti9fkvFkcutM/edit#](url)
process
create frontend documentation in order to collate and organize instructions for easier on boarding and general guidance in the frontend task test test research about what is react js follow the tutorial from com to understand how to set up the environment of react js keep reading from the tutorial to understand the basic component of react js set up the environment and checked if it works verify that the application i created shows properly in the website create a shared google doc and write down the documentation about my understanding of react js url
1
10,662
13,453,183,584
IssuesEvent
2020-09-09 00:11:11
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
reopened
Workspaces are always cleaned.
Pri2 cba devops-cicd-process/tech devops/prod doc-bug needs-more-info
In the documentation it's stated that: > “When you run a pipeline on a self-hosted agent, by default, none of the sub-directories are cleaned in between two consecutive runs.” is not what I am observing. Looking at my build it seems that outputs are cleaned: ``` ##[command]git version git version 2.26.2 ##[command]git config --get remote.origin.url ##[command]git clean -ffdx Lösche Android/Build/ Lösche Appium/Build/ Lösche Appium_Test/Build/ Lösche Browser/Build/ Lösche iOS/Build/ ##[command]git reset --hard HEAD ``` Needless to say that cleaning the workspace before running automated tests will make those tests fail. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Workspaces are always cleaned. - In the documentation it's stated that: > “When you run a pipeline on a self-hosted agent, by default, none of the sub-directories are cleaned in between two consecutive runs.” is not what I am observing. Looking at my build it seems that outputs are cleaned: ``` ##[command]git version git version 2.26.2 ##[command]git config --get remote.origin.url ##[command]git clean -ffdx Lösche Android/Build/ Lösche Appium/Build/ Lösche Appium_Test/Build/ Lösche Browser/Build/ Lösche iOS/Build/ ##[command]git reset --hard HEAD ``` Needless to say that cleaning the workspace before running automated tests will make those tests fail. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
workspaces are always cleaned in the documentation it s stated that “when you run a pipeline on a self hosted agent by default none of the sub directories are cleaned in between two consecutive runs ” is not what i am observing looking at my build it seems that outputs are cleaned git version git version git config get remote origin url git clean ffdx lösche android build lösche appium build lösche appium test build lösche browser build lösche ios build git reset hard head needless to say that cleaning the workspace before running automated tests will make those tests fail document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
329,793
28,308,814,574
IssuesEvent
2023-04-10 13:39:40
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
opened
CI flake in building docker images
bug area/test flakes area/docker area/ci
This seems to be a new flake in the docker publishing, altho i may have seen previously ```console 10 | ARG TARGETPLATFORM 11 | ENV TARGETPLATFORM="${TARGETPLATFORM:-linux/amd64}" 12 | >>> ADD "${TARGETPLATFORM}/release.tar.zst" /usr/local/bin/ 13 | 14 | -------------------- ERROR: failed to solve: Error processing tar file(invalid input: magic number mismatch): ``` seems as tho the zst has been corrupted in the pipeline somehow
1.0
CI flake in building docker images - This seems to be a new flake in the docker publishing, altho i may have seen previously ```console 10 | ARG TARGETPLATFORM 11 | ENV TARGETPLATFORM="${TARGETPLATFORM:-linux/amd64}" 12 | >>> ADD "${TARGETPLATFORM}/release.tar.zst" /usr/local/bin/ 13 | 14 | -------------------- ERROR: failed to solve: Error processing tar file(invalid input: magic number mismatch): ``` seems as tho the zst has been corrupted in the pipeline somehow
non_process
ci flake in building docker images this seems to be a new flake in the docker publishing altho i may have seen previously console arg targetplatform env targetplatform targetplatform linux add targetplatform release tar zst usr local bin error failed to solve error processing tar file invalid input magic number mismatch seems as tho the zst has been corrupted in the pipeline somehow
0
369,996
10,923,821,975
IssuesEvent
2019-11-22 08:46:02
bounswe/bounswe2019group6
https://api.github.com/repos/bounswe/bounswe2019group6
closed
Deciding "Trading Equipments" Endpoints
priority:high related:android related:backend related:frontend type:discussion / question
Hi fellows, Observing the previous endpoints changes, I think we should discuss about how endpoints should be defined and the what data they should include in themselves. First of all, I request to Backend team about their opinion about these two points which should be considered according to Project Requirements. In that way, we can make more solid progress than going back and forth on endpoints changes. Obviously, there will be some changes after decision about the endpoints, however, should be less than previously.
1.0
Deciding "Trading Equipments" Endpoints - Hi fellows, Observing the previous endpoints changes, I think we should discuss about how endpoints should be defined and the what data they should include in themselves. First of all, I request to Backend team about their opinion about these two points which should be considered according to Project Requirements. In that way, we can make more solid progress than going back and forth on endpoints changes. Obviously, there will be some changes after decision about the endpoints, however, should be less than previously.
non_process
deciding trading equipments endpoints hi fellows observing the previous endpoints changes i think we should discuss about how endpoints should be defined and the what data they should include in themselves first of all i request to backend team about their opinion about these two points which should be considered according to project requirements in that way we can make more solid progress than going back and forth on endpoints changes obviously there will be some changes after decision about the endpoints however should be less than previously
0
19,325
25,472,123,761
IssuesEvent
2022-11-25 11:05:46
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[IDP] [PM] UI issue in the admin details screen
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**Pre-condition:** mfa should be disabled in the PM **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Edit admin in the list and Verify **AR:** UI issue is observed in the admin details screen **ER:** All the field should be aligned properly ![UIPhone](https://user-images.githubusercontent.com/86007179/175250064-aab4edf7-c20c-44c9-9c2e-15b3618f02a9.png)
3.0
[IDP] [PM] UI issue in the admin details screen - **Pre-condition:** mfa should be disabled in the PM **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Edit admin in the list and Verify **AR:** UI issue is observed in the admin details screen **ER:** All the field should be aligned properly ![UIPhone](https://user-images.githubusercontent.com/86007179/175250064-aab4edf7-c20c-44c9-9c2e-15b3618f02a9.png)
process
ui issue in the admin details screen pre condition mfa should be disabled in the pm steps login to pm click on admins tab edit admin in the list and verify ar ui issue is observed in the admin details screen er all the field should be aligned properly
1
7,362
10,509,347,040
IssuesEvent
2019-09-27 10:45:31
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
Lift and Photon search for database in different folders
bug/2-confirmed kind/bug kind/regression process/candidate
Given a schema ```groovy datasource db { provider = env("PRISMA_PROVIDER") url = env("PRISMA_URL") } generator photon { provider = "photonjs" } model Role { id String @default(cuid()) @id role String @unique } ``` and `package.json` ```json { "private": true, "scripts": { "dev": "PRISMA_PROVIDER=sqlite PRISMA_URL=file:./dev.db prisma2 dev", "seed": "PRISMA_PROVIDER=sqlite PRISMA_URL=file:./dev.db ts-node prisma/seed.ts" }, "dependencies": { "prisma2": "^2.0.0-preview-6.1", "ts-node": "^8.3.0", "typescript": "^3.5.3" } } ``` and seeding script ```ts import Photon from "@generated/photon"; new Photon().roles .create({ data: { role: "DEFAULT" } }) .then(console.log) .catch(console.error) .finally(() => process.exit()); ``` when I run prisma2 dev and seed script ``` yarn dev yarn seed ``` I see an error ``` Invalid `photon.()` invocation in /Users/roman.paradeev/workspace/photon-tslint-bug/prisma/seed.ts:4:4 Reason: Error in connector: Error querying the database: no such table: dev.Role ``` and I see two databases created. - `<project root>/prisma/dev.db` created by `prisma2 dev` - `<project root>/dev.db` created by `ts-node prisma/seed.ts` When I change `PRISMA_URL` for seed script ```json "dev": "PRISMA_PROVIDER=sqlite PRISMA_URL=\"file:./dev.db\" prisma2 dev", "seed": "PRISMA_PROVIDER=sqlite PRISMA_URL=\"file:./prisma/dev.db\" ts-node prisma/seed.ts" ``` seeding script runs without errors.
1.0
Lift and Photon search for database in different folders - Given a schema ```groovy datasource db { provider = env("PRISMA_PROVIDER") url = env("PRISMA_URL") } generator photon { provider = "photonjs" } model Role { id String @default(cuid()) @id role String @unique } ``` and `package.json` ```json { "private": true, "scripts": { "dev": "PRISMA_PROVIDER=sqlite PRISMA_URL=file:./dev.db prisma2 dev", "seed": "PRISMA_PROVIDER=sqlite PRISMA_URL=file:./dev.db ts-node prisma/seed.ts" }, "dependencies": { "prisma2": "^2.0.0-preview-6.1", "ts-node": "^8.3.0", "typescript": "^3.5.3" } } ``` and seeding script ```ts import Photon from "@generated/photon"; new Photon().roles .create({ data: { role: "DEFAULT" } }) .then(console.log) .catch(console.error) .finally(() => process.exit()); ``` when I run prisma2 dev and seed script ``` yarn dev yarn seed ``` I see an error ``` Invalid `photon.()` invocation in /Users/roman.paradeev/workspace/photon-tslint-bug/prisma/seed.ts:4:4 Reason: Error in connector: Error querying the database: no such table: dev.Role ``` and I see two databases created. - `<project root>/prisma/dev.db` created by `prisma2 dev` - `<project root>/dev.db` created by `ts-node prisma/seed.ts` When I change `PRISMA_URL` for seed script ```json "dev": "PRISMA_PROVIDER=sqlite PRISMA_URL=\"file:./dev.db\" prisma2 dev", "seed": "PRISMA_PROVIDER=sqlite PRISMA_URL=\"file:./prisma/dev.db\" ts-node prisma/seed.ts" ``` seeding script runs without errors.
process
lift and photon search for database in different folders given a schema groovy datasource db provider env prisma provider url env prisma url generator photon provider photonjs model role id string default cuid id role string unique and package json json private true scripts dev prisma provider sqlite prisma url file dev db dev seed prisma provider sqlite prisma url file dev db ts node prisma seed ts dependencies preview ts node typescript and seeding script ts import photon from generated photon new photon roles create data role default then console log catch console error finally process exit when i run dev and seed script yarn dev yarn seed i see an error invalid photon invocation in users roman paradeev workspace photon tslint bug prisma seed ts reason error in connector error querying the database no such table dev role and i see two databases created prisma dev db created by dev dev db created by ts node prisma seed ts when i change prisma url for seed script json dev prisma provider sqlite prisma url file dev db dev seed prisma provider sqlite prisma url file prisma dev db ts node prisma seed ts seeding script runs without errors
1
5,858
31,388,243,560
IssuesEvent
2023-08-26 02:42:27
backdrop-contrib/rules
https://api.github.com/repos/backdrop-contrib/rules
closed
Error: Class 'RulesLog' not found in rules_update_1000()
type - bug pr - maintainer review requested
I'm trying to Migrate a Drupal 7.92 site using Rules 7.x-2.13 to a fresh Backdrop 1.23.0 site using Rules 1.x-2.3.2. When I get to the core/update.php step I'm getting an error: ```Internal Server Error ResponseText: Error: Class 'RulesLog' not found in rules_update_1000() (line 205 of /var/www/html/modules/contrib/rules/rules.install).``` Feels like I'm missing something basic here? Anyone have ideas?
True
Error: Class 'RulesLog' not found in rules_update_1000() - I'm trying to Migrate a Drupal 7.92 site using Rules 7.x-2.13 to a fresh Backdrop 1.23.0 site using Rules 1.x-2.3.2. When I get to the core/update.php step I'm getting an error: ```Internal Server Error ResponseText: Error: Class 'RulesLog' not found in rules_update_1000() (line 205 of /var/www/html/modules/contrib/rules/rules.install).``` Feels like I'm missing something basic here? Anyone have ideas?
non_process
error class ruleslog not found in rules update i m trying to migrate a drupal site using rules x to a fresh backdrop site using rules x when i get to the core update php step i m getting an error internal server error responsetext error class ruleslog not found in rules update line of var www html modules contrib rules rules install feels like i m missing something basic here anyone have ideas
0
22,081
30,603,767,686
IssuesEvent
2023-07-22 18:29:06
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
roblox-pyc 1.21.89 has 5 GuardDog issues
guarddog silent-process-execution
https://pypi.org/project/roblox-pyc https://inspector.pypi.io/project/roblox-pyc ```{ "dependency": "roblox-pyc", "version": "1.21.89", "result": { "issues": 5, "errors": {}, "results": { "silent-process-execution": [ { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:133", "code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:139", "code": " subprocess.call([\"rbxtsc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:178", "code": " subprocess.call([\"wally\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:188", "code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:195", "code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpfc_hm3wf/roblox-pyc" } }```
1.0
roblox-pyc 1.21.89 has 5 GuardDog issues - https://pypi.org/project/roblox-pyc https://inspector.pypi.io/project/roblox-pyc ```{ "dependency": "roblox-pyc", "version": "1.21.89", "result": { "issues": 5, "errors": {}, "results": { "silent-process-execution": [ { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:133", "code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:139", "code": " subprocess.call([\"rbxtsc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:178", "code": " subprocess.call([\"wally\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:188", "code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" }, { "location": "roblox-pyc-1.21.89/robloxpyc/robloxpy.py:195", "code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpfc_hm3wf/roblox-pyc" } }```
process
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc robloxpyc robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpfc roblox pyc
1
72,972
8,797,386,756
IssuesEvent
2018-12-23 18:58:16
mohitkh7/Easy-Learning
https://api.github.com/repos/mohitkh7/Easy-Learning
closed
Design 404 Error Page
Design Easy
Design a new template for custom 404 Error Page. Update the file `learn/templates/404.html` using bootstrap.
1.0
Design 404 Error Page - Design a new template for custom 404 Error Page. Update the file `learn/templates/404.html` using bootstrap.
non_process
design error page design a new template for custom error page update the file learn templates html using bootstrap
0
21,927
30,446,558,931
IssuesEvent
2023-07-15 18:48:27
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pyutils 0.0.1b13 has 2 GuardDog issues
guarddog typosquatting silent-process-execution
https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b13", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils", "silent-process-execution": [ { "location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpbsyv1u49/pyutils" } }```
1.0
pyutils 0.0.1b13 has 2 GuardDog issues - https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b13", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils", "silent-process-execution": [ { "location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpbsyv1u49/pyutils" } }```
process
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pytils python utils silent process execution location pyutils exec utils py pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp pyutils
1
736,968
25,494,695,996
IssuesEvent
2022-11-27 14:28:37
bounswe/bounswe2022group2
https://api.github.com/repos/bounswe/bounswe2022group2
opened
Mobile: Implementing Search Page
enhancement priority-high status-new mobile
### Issue Description I will create search screen for our mobile app. There will be search bar and results will be separated into two sections: Users and Learnifies. Further implementations details can be found under the PR. ### Step Details Steps that will be performed: - [ ] Building app bar - [ ] Building search bar - [ ] Building tab bar for sections. - [ ] Showing results in grid view ### Final Actions When the work is done, further details and final work can be found under the related PR. Link of the PR will be provided here. ### Deadline of the Issue 02.12.2022 - 23.59 ### Reviewer Egemen Atik ### Deadline for the Review 03.12.2022 - 15.00
1.0
Mobile: Implementing Search Page - ### Issue Description I will create search screen for our mobile app. There will be search bar and results will be separated into two sections: Users and Learnifies. Further implementations details can be found under the PR. ### Step Details Steps that will be performed: - [ ] Building app bar - [ ] Building search bar - [ ] Building tab bar for sections. - [ ] Showing results in grid view ### Final Actions When the work is done, further details and final work can be found under the related PR. Link of the PR will be provided here. ### Deadline of the Issue 02.12.2022 - 23.59 ### Reviewer Egemen Atik ### Deadline for the Review 03.12.2022 - 15.00
non_process
mobile implementing search page issue description i will create search screen for our mobile app there will be search bar and results will be separated into two sections users and learnifies further implementations details can be found under the pr step details steps that will be performed building app bar building search bar building tab bar for sections showing results in grid view final actions when the work is done further details and final work can be found under the related pr link of the pr will be provided here deadline of the issue reviewer egemen atik deadline for the review
0
3,974
6,904,988,328
IssuesEvent
2017-11-27 03:49:40
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
New options for getBalance and getTokenBal
status-inprocess tools-getBalance tools-getTokenBal type-enhancement
Need to add --showOnlyChange in getBalance and getTokenBal
1.0
New options for getBalance and getTokenBal - Need to add --showOnlyChange in getBalance and getTokenBal
process
new options for getbalance and gettokenbal need to add showonlychange in getbalance and gettokenbal
1
662,685
22,149,623,461
IssuesEvent
2022-06-03 15:26:23
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Button to open Specs panel is hard to see
unification stage: internal bug-hunt epic:ui-ux-improvements jira-migration fast-follows-1 priority: medium
## **Summary** The expandable specs icon is very hard to see and may also not meet accessibility requirements.I am unsure if the feedback from a month or so ago was addressed regarding overall accessibility issues for the new test runner UI. !image (48).png|width=108,height=141! !image (49).png|width=106,height=58! ## **Acceptance Criteria** * Should… * Should also… ### **Resources** Any Notion documents, Google documents, Figma Boards [https://cypressio.slack.com/archives/C02MYBT9Y5S/p1650569691458469](https://cypressio.slack.com/archives/C02MYBT9Y5S/p1650569691458469|smart-card) ┆Issue is synchronized with this [Jira Task](https://cypress-io.atlassian.net/browse/UNIFY-1667) by [Unito](https://www.unito.io) ┆Attachments: <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11870">image (48).png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11869">image (49).png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11903">Screen Shot 2022-04-26 at 4.45.49 PM.png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11902">Screen Shot 2022-04-26 at 4.47.58 PM.png</a> ┆author: Jonathan Canales ┆epic: UI/UX Improvements ┆friendlyId: UNIFY-1667 ┆priority: Medium ┆sprint: Fast Follows 1 ┆taskType: Task
1.0
Button to open Specs panel is hard to see - ## **Summary** The expandable specs icon is very hard to see and may also not meet accessibility requirements.I am unsure if the feedback from a month or so ago was addressed regarding overall accessibility issues for the new test runner UI. !image (48).png|width=108,height=141! !image (49).png|width=106,height=58! ## **Acceptance Criteria** * Should… * Should also… ### **Resources** Any Notion documents, Google documents, Figma Boards [https://cypressio.slack.com/archives/C02MYBT9Y5S/p1650569691458469](https://cypressio.slack.com/archives/C02MYBT9Y5S/p1650569691458469|smart-card) ┆Issue is synchronized with this [Jira Task](https://cypress-io.atlassian.net/browse/UNIFY-1667) by [Unito](https://www.unito.io) ┆Attachments: <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11870">image (48).png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11869">image (49).png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11903">Screen Shot 2022-04-26 at 4.45.49 PM.png</a> | <a href="https://cypress-io.atlassian.net/rest/api/2/attachment/content/11902">Screen Shot 2022-04-26 at 4.47.58 PM.png</a> ┆author: Jonathan Canales ┆epic: UI/UX Improvements ┆friendlyId: UNIFY-1667 ┆priority: Medium ┆sprint: Fast Follows 1 ┆taskType: Task
non_process
button to open specs panel is hard to see summary the expandable specs icon is very hard to see and may also not meet accessibility requirements i am unsure if the feedback from a month or so ago was addressed regarding overall accessibility issues for the new test runner ui image png width height image png width height acceptance criteria should… should also… resources any notion documents google documents figma boards ┆issue is synchronized with this by ┆attachments ┆author jonathan canales ┆epic ui ux improvements ┆friendlyid unify ┆priority medium ┆sprint fast follows ┆tasktype task
0
18,734
24,632,057,549
IssuesEvent
2022-10-17 03:37:08
Sunbird-cQube/community
https://api.github.com/repos/Sunbird-cQube/community
closed
Configuration Testing
enhancement Processing Analyze
Configuration testing with positive and negative scenarios and capturing the output in the document
1.0
Configuration Testing - Configuration testing with positive and negative scenarios and capturing the output in the document
process
configuration testing configuration testing with positive and negative scenarios and capturing the output in the document
1
77,367
9,993,395,215
IssuesEvent
2019-07-11 15:16:21
opengeospatial/LANDRS
https://api.github.com/repos/opengeospatial/LANDRS
closed
Add new proposed OAS for LANDRS coverages
documentation enhancement help wanted
Starting with an [OpenAPI document](https://github.com/opengeospatial/LANDRS/tree/master/DesignDocs/DesignHack1/openapi/openapi.yaml) largely influenced by the [OGC Coverages OpenAPI](https://github.com/opengeospatial/ogc_api_coverages/blob/master/core/openapi/openapi.yaml) we will determine whether the API meets a predefined, well understood use case before moving on to writing the service implementation in NodeJS. Additionally, we need to merge in the existing API design by @r4space which exists [here](https://github.com/opengeospatial/LANDRS/blob/master/DesignDocs/DesignHack1/dummydemo/temperature/api/oas-doc.yaml)
1.0
Add new proposed OAS for LANDRS coverages - Starting with an [OpenAPI document](https://github.com/opengeospatial/LANDRS/tree/master/DesignDocs/DesignHack1/openapi/openapi.yaml) largely influenced by the [OGC Coverages OpenAPI](https://github.com/opengeospatial/ogc_api_coverages/blob/master/core/openapi/openapi.yaml) we will determine whether the API meets a predefined, well understood use case before moving on to writing the service implementation in NodeJS. Additionally, we need to merge in the existing API design by @r4space which exists [here](https://github.com/opengeospatial/LANDRS/blob/master/DesignDocs/DesignHack1/dummydemo/temperature/api/oas-doc.yaml)
non_process
add new proposed oas for landrs coverages starting with an largely influenced by the we will determine whether the api meets a predefined well understood use case before moving on to writing the service implementation in nodejs additionally we need to merge in the existing api design by which exists
0
129,034
17,668,694,507
IssuesEvent
2021-08-23 00:21:03
Automattic/wp-calypso
https://api.github.com/repos/Automattic/wp-calypso
closed
Write Button: Make it visually more clear that the number of posts shown to the right is a separate button
[Type] Enhancement Navigation Design [Size] XS [Closed] Fixed Back to Basics
**What** I only recently found out that the number next to the Write button in the WordPress.com toolbar is a separate button! If you click it, it shows the list of the drafts you have. https://user-images.githubusercontent.com/39308239/114265504-d1c22480-99f9-11eb-9208-f9e9cc06467b.mov **Why** Users may not be aware that the number is clickable and it results in a different action than clicking Write. The Write button with the number appears visually as one button, so it's unclear that those are two different buttons actually. **How** Maybe add a clearer visual separation with a vertical bar `|` or some other way that is more clear from the usability point of view. A designer's input would be helpful here to propose the best solution.
1.0
Write Button: Make it visually more clear that the number of posts shown to the right is a separate button - **What** I only recently found out that the number next to the Write button in the WordPress.com toolbar is a separate button! If you click it, it shows the list of the drafts you have. https://user-images.githubusercontent.com/39308239/114265504-d1c22480-99f9-11eb-9208-f9e9cc06467b.mov **Why** Users may not be aware that the number is clickable and it results in a different action than clicking Write. The Write button with the number appears visually as one button, so it's unclear that those are two different buttons actually. **How** Maybe add a clearer visual separation with a vertical bar `|` or some other way that is more clear from the usability point of view. A designer's input would be helpful here to propose the best solution.
non_process
write button make it visually more clear that the number of posts shown to the right is a separate button what i only recently found out that the number next to the write button in the wordpress com toolbar is a separate button if you click it it shows the list of the drafts you have why users may not be aware that the number is clickable and it results in a different action than clicking write the write button with the number appears visually as one button so it s unclear that those are two different buttons actually how maybe add a clearer visual separation with a vertical bar or some other way that is more clear from the usability point of view a designer s input would be helpful here to propose the best solution
0
178,722
29,996,820,289
IssuesEvent
2023-06-26 06:25:33
SeaGL/organization
https://api.github.com/repos/SeaGL/organization
opened
2023 Linux Magazine site ads
design
They prefer png or jpg. We can get ads on the Linux Magazine site. Static or animated images under 1MB in size. 728x90 300x250 160x600 Let's start with announcing basics of conference with a space for the tagline of the time, e.g. location announcement, keynotes, etc. This is likely highest priority for the sponsorship committee as they can toss our ads up once we get them over.
1.0
2023 Linux Magazine site ads - They prefer png or jpg. We can get ads on the Linux Magazine site. Static or animated images under 1MB in size. 728x90 300x250 160x600 Let's start with announcing basics of conference with a space for the tagline of the time, e.g. location announcement, keynotes, etc. This is likely highest priority for the sponsorship committee as they can toss our ads up once we get them over.
non_process
linux magazine site ads they prefer png or jpg we can get ads on the linux magazine site static or animated images under in size let s start with announcing basics of conference with a space for the tagline of the time e g location announcement keynotes etc this is likely highest priority for the sponsorship committee as they can toss our ads up once we get them over
0
5,346
8,178,571,469
IssuesEvent
2018-08-28 14:09:39
GoogleCloudPlatform/google-cloud-dotnet
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-dotnet
closed
Release Storage library
type: process
Hi, GCS CMEK is now GA. Based on developer feedback, developers are unable to use the CMEK features for DotNet as the features are only in `2.2.0-beta01`, [based on release history](https://github.com/GoogleCloudPlatform/google-cloud-dotnet/releases/tag/Google.Cloud.Storage.V1-2.2.0-beta01). If possible, request is to release a non-beta version of `2.2.0-beta01` library so it will be picked up when a developer runs `dotnet add package Google.Cloud.Storage.V1`. Thank you!
1.0
Release Storage library - Hi, GCS CMEK is now GA. Based on developer feedback, developers are unable to use the CMEK features for DotNet as the features are only in `2.2.0-beta01`, [based on release history](https://github.com/GoogleCloudPlatform/google-cloud-dotnet/releases/tag/Google.Cloud.Storage.V1-2.2.0-beta01). If possible, request is to release a non-beta version of `2.2.0-beta01` library so it will be picked up when a developer runs `dotnet add package Google.Cloud.Storage.V1`. Thank you!
process
release storage library hi gcs cmek is now ga based on developer feedback developers are unable to use the cmek features for dotnet as the features are only in if possible request is to release a non beta version of library so it will be picked up when a developer runs dotnet add package google cloud storage thank you
1
20,333
26,985,170,222
IssuesEvent
2023-02-09 15:40:31
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Images are not displayed when transforming ditamap with chunk="to-content" using HTML5 with preprocess2
bug preprocess2
## Expected Behavior When transforming ditamap having chunk="to-content" set on it, images should be displayed. ## Actual Behavior No images are present in the output. ## Steps to Reproduce [chunk.zip](https://github.com/dita-ot/dita-ot/files/9972528/chunk.zip) 1. Download and unzip the archive 2. Run the HTML5 transformation using preprocess2 ## Environment * DITA-OT version: 3.7.4 * Operating system and version: Windows * How did you run DITA-OT? oXygen * Transformation type: HTML5 using preprocess2
1.0
Images are not displayed when transforming ditamap with chunk="to-content" using HTML5 with preprocess2 - ## Expected Behavior When transforming ditamap having chunk="to-content" set on it, images should be displayed. ## Actual Behavior No images are present in the output. ## Steps to Reproduce [chunk.zip](https://github.com/dita-ot/dita-ot/files/9972528/chunk.zip) 1. Download and unzip the archive 2. Run the HTML5 transformation using preprocess2 ## Environment * DITA-OT version: 3.7.4 * Operating system and version: Windows * How did you run DITA-OT? oXygen * Transformation type: HTML5 using preprocess2
process
images are not displayed when transforming ditamap with chunk to content using with expected behavior when transforming ditamap having chunk to content set on it images should be displayed actual behavior no images are present in the output steps to reproduce download and unzip the archive run the transformation using environment dita ot version operating system and version windows how did you run dita ot oxygen transformation type using
1
20,013
26,486,339,940
IssuesEvent
2023-01-17 18:21:04
nion-software/nionswift
https://api.github.com/repos/nion-software/nionswift
opened
Line profile on distinct x-y calibrations should choose based on line orientation
type - enhancement level - easy f - line-profile f - processing f - calibration
Email 2022-05-19: #### N if I draw a vertical line profile on an image that has different x and y units of calibration, then the width gets to be specified in vertical units, and the length in horizontal units, obviously it should be the other way around. One can discuss about what to use for a tilted line, but a vertical line there is no ambiguity. Not displaying the width and showing it in wrong units can be kind of awkward. #### C I can change it so that it figures out whether the orientation of the line is mostly horizontal or vertical with 45° being an arbitrary, but consistent choice. I think it defaults to its current behavior since we were mostly using it for integrating 2D spectra images into 1D spectra early on.
1.0
Line profile on distinct x-y calibrations should choose based on line orientation - Email 2022-05-19: #### N if I draw a vertical line profile on an image that has different x and y units of calibration, then the width gets to be specified in vertical units, and the length in horizontal units, obviously it should be the other way around. One can discuss about what to use for a tilted line, but a vertical line there is no ambiguity. Not displaying the width and showing it in wrong units can be kind of awkward. #### C I can change it so that it figures out whether the orientation of the line is mostly horizontal or vertical with 45° being an arbitrary, but consistent choice. I think it defaults to its current behavior since we were mostly using it for integrating 2D spectra images into 1D spectra early on.
process
line profile on distinct x y calibrations should choose based on line orientation email n if i draw a vertical line profile on an image that has different x and y units of calibration then the width gets to be specified in vertical units and the length in horizontal units obviously it should be the other way around one can discuss about what to use for a tilted line but a vertical line there is no ambiguity not displaying the width and showing it in wrong units can be kind of awkward c i can change it so that it figures out whether the orientation of the line is mostly horizontal or vertical with ° being an arbitrary but consistent choice i think it defaults to its current behavior since we were mostly using it for integrating spectra images into spectra early on
1
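The 45° orientation rule described in the nionswift record above can be sketched as a small helper (illustrative only; the function name is hypothetical and this is not the actual nionswift API — it merely demonstrates the "mostly horizontal or vertical, with 45° as an arbitrary but consistent choice" logic the mentor describes):

```python
def profile_orientation(x0, y0, x1, y1):
    """Classify a line profile as 'horizontal' or 'vertical' using a
    45-degree threshold, so width and length calibrations can be taken
    from the matching axis. A line at exactly 45 degrees falls back to
    'horizontal' as an arbitrary but consistent choice."""
    dx = abs(x1 - x0)  # horizontal extent of the line
    dy = abs(y1 - y0)  # vertical extent of the line
    return "vertical" if dy > dx else "horizontal"
```

A perfectly vertical line (`dx == 0`) is always classified `"vertical"`, which addresses the unambiguous case the reporter raises; only tilted lines depend on the threshold.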
11,981
14,737,101,132
IssuesEvent
2021-01-07 00:52:26
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
SA Billing Login Issues
anc-ops anc-process anp-1 ant-bug ant-support has attachment
In GitLab by @kdjstudios on Apr 16, 2018, 08:39 **Submitted by:** "Trawana Ervin" <trawana.ervin@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-09-11294 **Server:** Internal **Client/Site:** Chattanooga **Account:** NA **Issue:** There is an issue with SABilling and I am unable to log into the system, Chrome & Firefox. The following error message is received: Please let me know if more information is needed. ![image](/uploads/e40cdaf7965471beaf8497b4cd1fc107/image.png)
1.0
SA Billing Login Issues - In GitLab by @kdjstudios on Apr 16, 2018, 08:39 **Submitted by:** "Trawana Ervin" <trawana.ervin@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-09-11294 **Server:** Internal **Client/Site:** Chattanooga **Account:** NA **Issue:** There is an issue with SABilling and I am unable to log into the system, Chrome & Firefox. The following error message is received: Please let me know if more information is needed. ![image](/uploads/e40cdaf7965471beaf8497b4cd1fc107/image.png)
process
sa billing login issues in gitlab by kdjstudios on apr submitted by trawana ervin helpdesk server internal client site chattanooga account na issue there is an issue with sabilling and i am unable to log into the system chrome firefox the following error message is received please let me know if more information is needed uploads image png
1
7,213
10,568,001,988
IssuesEvent
2019-10-06 09:41:19
lastunicorn/MedicX
https://api.github.com/repos/lastunicorn/MedicX
opened
Create new clinic when "New Clinic" button is clicked
requirement
As a user, When I press the "New Clinic" button, I want a new empty clinic item to be created in the clinics list, to be selected as the current one and to be displayed in the details panel, So that I can add information in the system about a new clinic. Epic: #3
1.0
Create new clinic when "New Clinic" button is clicked - As a user, When I press the "New Clinic" button, I want a new empty clinic item to be created in the clinics list, to be selected as the current one and to be displayed in the details panel, So that I can add information in the system about a new clinic. Epic: #3
non_process
create new clinic when new clinic button is clicked as a user when i press the new clinic button i want a new empty clinic item to be created in the clinics list to be selected as the current one and to be displayed in the details panel so that i can add information in the system about a new clinic epic
0
12,772
15,158,152,022
IssuesEvent
2021-02-12 00:27:11
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
local contrast / local laplacian broken
no-issue-activity scope: image processing understood: incomplete
**Describe the bug** dt 3.1 displays wrong/different colors than in the exported image This issue has been already reported several times - apparently there was an attempt fix it but now the result is worse than before - in dt 2.6 the colors were correct when the top/bottom/left/right panels were hidden at 100% view, now all zoom levels are wrong, with and without panels, including fit to screen view. **To Reproduce** 1. open a photo, disable base curve 2. apply filmic 3. apply local contrast in local laplacian mode, choose a very high detail value, e.g. 400% 4. switch between fit to screen and 100% view, hide and unhide the top, bottom left or right "panel" (which results in a smaller or larger preview) and compare the colors 5. export the photo, open it in an image viewer and compare carefully with the photo open in dt **Expected behavior** colors should always look the same at all zoom levels **Screenshots** exported photo, fit to screen view in image viewer: ![screenshot_fittoscreeneported](https://user-images.githubusercontent.com/16564210/72168352-4bb9c700-33cd-11ea-8053-0b516e1f9a1d.png) photo in dt, fit to screen view: ![screenshot_fittoscreendt](https://user-images.githubusercontent.com/16564210/72168421-70ae3a00-33cd-11ea-8b4f-8518eca581aa.png) you can see that an area in the middle of the sky is brighter, however, the area around the woman's feet (left bottom corner) is a little less bright photo in dt, fit to screen, with thumbnails: ![screenshot_fittoscreenwiththumbs](https://user-images.githubusercontent.com/16564210/72168762-15307c00-33ce-11ea-9efb-e5d839a34b49.png) here you can see that the whole sky is the same as in the exported image, but the botto left corner is brighter finally screenshots at 100% view of the exported photo and the photo open in dt, I think you can see the difference, so no comments here ![screenshot_100percentexported](https://user-images.githubusercontent.com/16564210/72168982-853f0200-33ce-11ea-84b7-07c461520940.png) 
![screenshot_100percentdt](https://user-images.githubusercontent.com/16564210/72169233-05656780-33cf-11ea-877f-aa13f82be160.png) ![screenshot_100percentexported2](https://user-images.githubusercontent.com/16564210/72169308-24fc9000-33cf-11ea-931b-a1cc4402e7ab.png) ![screenshot_100percentdt2](https://user-images.githubusercontent.com/16564210/72169671-d8fe1b00-33cf-11ea-8973-126cb49c78c3.png) **Platform (please complete the following information):** - Darktable Version: 3.1 (git master, deb-package from Opensuse), 3.0 - OS: Debian Bullseye - OpenCL active and inactive - Nvidia MX250, driver Nvidia 430 **Additional context** Sorry to report this again, nevertheless thanks for the attempt to fix. I thought it is important to report that there is a different behaviour now. I can provide the raw + sidecar if necessary. Maybe one more thing: The screenshots were taken on an external BenQ screen, I also checked on my internal screen, there the result is different, but the previews are not correct/identical with the exported photo either, I think it is exactly the other way round, the brighter areas are darker and the darker areas are brighter. It appears to be really strange...
1.0
local contrast / local laplacian broken - **Describe the bug** dt 3.1 displays wrong/different colors than in the exported image This issue has been already reported several times - apparently there was an attempt fix it but now the result is worse than before - in dt 2.6 the colors were correct when the top/bottom/left/right panels were hidden at 100% view, now all zoom levels are wrong, with and without panels, including fit to screen view. **To Reproduce** 1. open a photo, disable base curve 2. apply filmic 3. apply local contrast in local laplacian mode, choose a very high detail value, e.g. 400% 4. switch between fit to screen and 100% view, hide and unhide the top, bottom left or right "panel" (which results in a smaller or larger preview) and compare the colors 5. export the photo, open it in an image viewer and compare carefully with the photo open in dt **Expected behavior** colors should always look the same at all zoom levels **Screenshots** exported photo, fit to screen view in image viewer: ![screenshot_fittoscreeneported](https://user-images.githubusercontent.com/16564210/72168352-4bb9c700-33cd-11ea-8053-0b516e1f9a1d.png) photo in dt, fit to screen view: ![screenshot_fittoscreendt](https://user-images.githubusercontent.com/16564210/72168421-70ae3a00-33cd-11ea-8b4f-8518eca581aa.png) you can see that an area in the middle of the sky is brighter, however, the area around the woman's feet (left bottom corner) is a little less bright photo in dt, fit to screen, with thumbnails: ![screenshot_fittoscreenwiththumbs](https://user-images.githubusercontent.com/16564210/72168762-15307c00-33ce-11ea-9efb-e5d839a34b49.png) here you can see that the whole sky is the same as in the exported image, but the botto left corner is brighter finally screenshots at 100% view of the exported photo and the photo open in dt, I think you can see the difference, so no comments here 
![screenshot_100percentexported](https://user-images.githubusercontent.com/16564210/72168982-853f0200-33ce-11ea-84b7-07c461520940.png) ![screenshot_100percentdt](https://user-images.githubusercontent.com/16564210/72169233-05656780-33cf-11ea-877f-aa13f82be160.png) ![screenshot_100percentexported2](https://user-images.githubusercontent.com/16564210/72169308-24fc9000-33cf-11ea-931b-a1cc4402e7ab.png) ![screenshot_100percentdt2](https://user-images.githubusercontent.com/16564210/72169671-d8fe1b00-33cf-11ea-8973-126cb49c78c3.png) **Platform (please complete the following information):** - Darktable Version: 3.1 (git master, deb-package from Opensuse), 3.0 - OS: Debian Bullseye - OpenCL active and inactive - Nvidia MX250, driver Nvidia 430 **Additional context** Sorry to report this again, nevertheless thanks for the attempt to fix. I thought it is important to report that there is a different behaviour now. I can provide the raw + sidecar if necessary. Maybe one more thing: The screenshots were taken on an external BenQ screen, I also checked on my internal screen, there the result is different, but the previews are not correct/identical with the exported photo either, I think it is exactly the other way round, the brighter areas are darker and the darker areas are brighter. It appears to be really strange...
process
local contrast local laplacian broken describe the bug dt displays wrong different colors than in the exported image this issue has been already reported several times apparently there was an attempt fix it but now the result is worse than before in dt the colors were correct when the top bottom left right panels were hidden at view now all zoom levels are wrong with and without panels including fit to screen view to reproduce open a photo disable base curve apply filmic apply local contrast in local laplacian mode choose a very high detail value e g switch between fit to screen and view hide and unhide the top bottom left or right panel which results in a smaller or larger preview and compare the colors export the photo open it in an image viewer and compare carefully with the photo open in dt expected behavior colors should always look the same at all zoom levels screenshots exported photo fit to screen view in image viewer photo in dt fit to screen view you can see that an area in the middle of the sky is brighter however the area around the woman s feet left bottom corner is a little less bright photo in dt fit to screen with thumbnails here you can see that the whole sky is the same as in the exported image but the botto left corner is brighter finally screenshots at view of the exported photo and the photo open in dt i think you can see the difference so no comments here platform please complete the following information darktable version git master deb package from opensuse os debian bullseye opencl active and inactive nvidia driver nvidia additional context sorry to report this again nevertheless thanks for the attempt to fix i thought it is important to report that there is a different behaviour now i can provide the raw sidecar if necessary maybe one more thing the screenshots were taken on an external benq screen i also checked on my internal screen there the result is different but the previews are not correct identical with the exported photo either i 
think it is exactly the other way round the brighter areas are darker and the darker areas are brighter it appears to be really strange
1
16,000
20,188,207,743
IssuesEvent
2022-02-11 01:18:05
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Deprecate legacy network security controls
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Network Security
<a href="https://docs.microsoft.com/azure/architecture/framework/Security/network-security-containment#discontinue-legacy-network-security-technology">Deprecate legacy network security controls</a> <p><b>Why Consider This?</b></p> Network-based DLP is decreasingly effective at identifying both inadvertent and deliberate data loss. The reason for this is that most modern protocols and attackers use network-level encryption for inbound and outbound communications. While the organization can use "SSL-bridging" to provide an "authorized man-in-the-middle" that terminates and then reestablishes encrypted network connections, this can also introduce privacy, security and reliability challenges. <p><b>Context</b></p> <p><span>Carefully plan the use of signature-based Network Intrusion Detection/Network Intrusion Prevention (NIDS/NIPS) Systems and Network Data Leakage/Loss Prevention (DLP) as you adopt cloud applications services. IDS/IPS often generate an overwhelming number of false positive alerts that can contribute to SOC Analyst alert fatigue. 
While a well-tuned IDS/IPS system can be effective for classic application architectures, these systems do not work well for modern SaaS and PaaS application delivery models.</span></p><p><span>Because of how much is changing with network security, it's recommended to reviewand update existing network security strategy focused on these considerations as workloads are migrated toAzure:</span></p><ul style="list-style-type:disc"><li value="1" style="margin-right: 0px;text-indent: 0px;"><span>The major cloud service providers filter malformed packets and common network layer attacks.</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Many traditional NIDS/NIPS solutions use signature-based approaches on a per packet basis and easily evaded by attackers and typically produce a high rate of false positives.</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Ensure your IDS/IPS system(s) are providing meaningful positive value from alerts they generate.</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Measure alert quality by the percentage of true positives (real attacks detections) vs false positive alerts (false alarms) in the alerts raised by the system.</span></li><li value="5" style="margin-right: 0px;text-indent: 0px;"><span>Avoid analyst fatigue by providing only high-quality alerts to security analysts who investigate them. Ideally, alerts should have a 90% true positive rate for creating incidents in the primary queue that triage (Tier 1) investigation teams must respond to, while lower quality alerts would go to proactive hunting exercises to reduce analyst fatigue and burnout.</span></li><li value="6" style="margin-right: 0px;text-indent: 0px;"><span>Adopt modern Zero Trust identity approaches for protecting modern SaaS and PaaS applications. 
See https://aka.ms/zero-trust for more information</span></li><li value="7" style="margin-right: 0px;text-indent: 0px;"><span>For IaaS workloads, focus on network security solutions that provide per network context rather than per packet/session context. While the technology to achieve this is still evolving, software defined networks in the cloud are naturally instrumented and can achieve this much more easily than on-premises equipment.</span></li><li value="8" style="margin-right: 0px;text-indent: 0px;"><span>Favor solutions that effectively apply machine learning techniques across these large volumes of traffic. ML technology is far superior to static/manual human analysis at rapidly identifying anomalies that could be attacker activity out of normal traffic patterns.</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Review existing network security controls and minimize the use of signature-based Network Intrusion Detection/Network Intrusion Prevention (NIDS/NIPS) Systems and Network Data Leakage/Loss Prevention (DLP) as you adopt cloud applications services.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/network-security-containment#use-of-legacy-network-security-technology" target="_blank"><span>Use of legacy network security technology</span></a><span /></p>
1.0
Deprecate legacy network security controls - <a href="https://docs.microsoft.com/azure/architecture/framework/Security/network-security-containment#discontinue-legacy-network-security-technology">Deprecate legacy network security controls</a> <p><b>Why Consider This?</b></p> Network-based DLP is decreasingly effective at identifying both inadvertent and deliberate data loss. The reason for this is that most modern protocols and attackers use network-level encryption for inbound and outbound communications. While the organization can use "SSL-bridging" to provide an "authorized man-in-the-middle" that terminates and then reestablishes encrypted network connections, this can also introduce privacy, security and reliability challenges. <p><b>Context</b></p> <p><span>Carefully plan the use of signature-based Network Intrusion Detection/Network Intrusion Prevention (NIDS/NIPS) Systems and Network Data Leakage/Loss Prevention (DLP) as you adopt cloud applications services. IDS/IPS often generate an overwhelming number of false positive alerts that can contribute to SOC Analyst alert fatigue. 
While a well-tuned IDS/IPS system can be effective for classic application architectures, these systems do not work well for modern SaaS and PaaS application delivery models.</span></p><p><span>Because of how much is changing with network security, it's recommended to reviewand update existing network security strategy focused on these considerations as workloads are migrated toAzure:</span></p><ul style="list-style-type:disc"><li value="1" style="margin-right: 0px;text-indent: 0px;"><span>The major cloud service providers filter malformed packets and common network layer attacks.</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Many traditional NIDS/NIPS solutions use signature-based approaches on a per packet basis and easily evaded by attackers and typically produce a high rate of false positives.</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Ensure your IDS/IPS system(s) are providing meaningful positive value from alerts they generate.</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Measure alert quality by the percentage of true positives (real attacks detections) vs false positive alerts (false alarms) in the alerts raised by the system.</span></li><li value="5" style="margin-right: 0px;text-indent: 0px;"><span>Avoid analyst fatigue by providing only high-quality alerts to security analysts who investigate them. Ideally, alerts should have a 90% true positive rate for creating incidents in the primary queue that triage (Tier 1) investigation teams must respond to, while lower quality alerts would go to proactive hunting exercises to reduce analyst fatigue and burnout.</span></li><li value="6" style="margin-right: 0px;text-indent: 0px;"><span>Adopt modern Zero Trust identity approaches for protecting modern SaaS and PaaS applications. 
See https://aka.ms/zero-trust for more information</span></li><li value="7" style="margin-right: 0px;text-indent: 0px;"><span>For IaaS workloads, focus on network security solutions that provide per network context rather than per packet/session context. While the technology to achieve this is still evolving, software defined networks in the cloud are naturally instrumented and can achieve this much more easily than on-premises equipment.</span></li><li value="8" style="margin-right: 0px;text-indent: 0px;"><span>Favor solutions that effectively apply machine learning techniques across these large volumes of traffic. ML technology is far superior to static/manual human analysis at rapidly identifying anomalies that could be attacker activity out of normal traffic patterns.</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Review existing network security controls and minimize the use of signature-based Network Intrusion Detection/Network Intrusion Prevention (NIDS/NIPS) Systems and Network Data Leakage/Loss Prevention (DLP) as you adopt cloud applications services.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/network-security-containment#use-of-legacy-network-security-technology" target="_blank"><span>Use of legacy network security technology</span></a><span /></p>
process
deprecate legacy network security controls why consider this network based dlp is decreasingly effective at identifying both inadvertent and deliberate data loss the reason for this is that most modern protocols and attackers use network level encryption for inbound and outbound communications while the organization can use ssl bridging to provide an authorized man in the middle that terminates and then reestablishes encrypted network connections this can also introduce privacy security and reliability challenges context carefully plan the use of signature based network intrusion detection network intrusion prevention nids nips systems and network data leakage loss prevention dlp as you adopt cloud applications services ids ips often generate an overwhelming number of false positive alerts that can contribute to soc analyst alert fatigue while a well tuned ids ips system can be effective for classic application architectures these systems do not work well for modern saas and paas application delivery models because of how much is changing with network security it s recommended to reviewand update existing network security strategy focused on these considerations as workloads are migrated toazure the major cloud service providers filter malformed packets and common network layer attacks many traditional nids nips solutions use signature based approaches on a per packet basis and easily evaded by attackers and typically produce a high rate of false positives ensure your ids ips system s are providing meaningful positive value from alerts they generate measure alert quality by the percentage of true positives real attacks detections vs false positive alerts false alarms in the alerts raised by the system avoid analyst fatigue by providing only high quality alerts to security analysts who investigate them ideally alerts should have a true positive rate for creating incidents in the primary queue that triage tier investigation teams must respond to while lower quality 
alerts would go to proactive hunting exercises to reduce analyst fatigue and burnout adopt modern zero trust identity approaches for protecting modern saas and paas applications see for more information for iaas workloads focus on network security solutions that provide per network context rather than per packet session context while the technology to achieve this is still evolving software defined networks in the cloud are naturally instrumented and can achieve this much more easily than on premises equipment favor solutions that effectively apply machine learning techniques across these large volumes of traffic ml technology is far superior to static manual human analysis at rapidly identifying anomalies that could be attacker activity out of normal traffic patterns suggested actions review existing network security controls and minimize the use of signature based network intrusion detection network intrusion prevention nids nips systems and network data leakage loss prevention dlp as you adopt cloud applications services learn more use of legacy network security technology
1
22,061
30,579,451,100
IssuesEvent
2023-07-21 08:34:03
LAAC-LSCP/ChildProject
https://api.github.com/repos/LAAC-LSCP/ChildProject
opened
audio conversion should have a default 'standard' conversion
enhancement audio-processing
**Is your feature request related to a problem? Please describe.** The audio processor pipeline gives a lot of feature that allow people to handle their audio the way they want. But for the vast majority of the time, we just convert them to the standard format. So people have to figure out what the standard is and then apply it to the raw recordings (including multiple operations if the number of channels and sample rate need to be changed) **Describe the solution you'd like** Create a call in audio processor that just takes the raw audio and converts it to the standard format without any other option, so 16kHz, pcm_s16le , mono, wav format. (An interesting thing would also be that if the raw audio is exactly in this format already, just copy it to the converted standard folder without doing conversions (this will allow git-annex to consider raw and standard as the same file, removing the duplication of the same file in the annex (as a conversion will slightly change the RIFF header))
1.0
audio conversion should have a default 'standard' conversion - **Is your feature request related to a problem? Please describe.** The audio processor pipeline gives a lot of feature that allow people to handle their audio the way they want. But for the vast majority of the time, we just convert them to the standard format. So people have to figure out what the standard is and then apply it to the raw recordings (including multiple operations if the number of channels and sample rate need to be changed) **Describe the solution you'd like** Create a call in audio processor that just takes the raw audio and converts it to the standard format without any other option, so 16kHz, pcm_s16le , mono, wav format. (An interesting thing would also be that if the raw audio is exactly in this format already, just copy it to the converted standard folder without doing conversions (this will allow git-annex to consider raw and standard as the same file, removing the duplication of the same file in the annex (as a conversion will slightly change the RIFF header))
process
audio conversion should have a default standard conversion is your feature request related to a problem please describe the audio processor pipeline gives a lot of feature that allow people to handle their audio the way they want but for the vast majority of the time we just convert them to the standard format so people have to figure out what the standard is and then apply it to the raw recordings including multiple operations if the number of channels and sample rate need to be changed describe the solution you d like create a call in audio processor that just takes the raw audio and converts it to the standard format without any other option so pcm mono wav format an interesting thing would also be that if the raw audio is exactly in this format already just copy it to the converted standard folder without doing conversions this will allow git annex to consider raw and standard as the same file removing the duplication of the same file in the annex as a conversion will slightly change the riff header
1
824,812
31,224,618,956
IssuesEvent
2023-08-19 00:42:02
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
opened
spanner: TestClient_ReadWriteTransaction_Tag failed
type: bug priority: p1 flakybot: issue
Note: #7762 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: 255bdf21a1f781ad79ce746cf0c320a6b90cd61f buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2da27176-6669-4007-a3ee-7b837244d04e), [Sponge](http://sponge2/2da27176-6669-4007-a3ee-7b837244d04e) status: failed <details><summary>Test output</summary><br><pre>================== WARNING: DATA RACE Write at 0x00c000c0a670 by goroutine 21940: cloud.google.com/go/spanner/internal/testutil.StatementResult.getResultSetWithTransactionSet() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:222 +0x74b cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).executeStreamingSQL() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:878 +0x53a cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).ExecuteStreamingSql() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:849 +0xa4 cloud.google.com/go/spanner/apiv1/spannerpb._Spanner_ExecuteStreamingSql_Handler() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3909 +0xf8 google.golang.org/grpc.(*Server).processStreamingRPC() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1654 +0x2087 google.golang.org/grpc.(*Server).handleStream() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1741 +0xfae google.golang.org/grpc.(*Server).serveStreams.func1.1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0xec Previous read at 0x00c000c0a670 by goroutine 21923: google.golang.org/protobuf/internal/impl.pointer.Elem() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/pointer_unsafe.go:119 +0x411 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshalAppendPointer() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:136 +0x3bf google.golang.org/protobuf/internal/impl.appendMessageInfo() 
/go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/codec_field.go:238 +0x199 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshalAppendPointer() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:139 +0x499 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshal() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:107 +0xda google.golang.org/protobuf/internal/impl.(*MessageInfo).marshal-fm() <autogenerated>:1 +0xd4 google.golang.org/protobuf/proto.MarshalOptions.marshal() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/proto/encode.go:166 +0x3c2 google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/proto/encode.go:125 +0xa5 github.com/golang/protobuf/proto.marshalAppend() /go/pkg/mod/github.com/golang/protobuf@v1.5.3/proto/wire.go:40 +0xe4 github.com/golang/protobuf/proto.Marshal() /go/pkg/mod/github.com/golang/protobuf@v1.5.3/proto/wire.go:23 +0x67 google.golang.org/grpc/encoding/proto.codec.Marshal() /go/pkg/mod/google.golang.org/grpc@v1.57.0/encoding/proto/proto.go:45 +0x68 google.golang.org/grpc/encoding/proto.(*codec).Marshal() <autogenerated>:1 +0x5d google.golang.org/grpc.encode() /go/pkg/mod/google.golang.org/grpc@v1.57.0/rpc_util.go:633 +0x6a google.golang.org/grpc.prepareMsg() /go/pkg/mod/google.golang.org/grpc@v1.57.0/stream.go:1766 +0x1b6 google.golang.org/grpc.(*serverStream).SendMsg() /go/pkg/mod/google.golang.org/grpc@v1.57.0/stream.go:1642 +0x34d cloud.google.com/go/spanner/apiv1/spannerpb.(*spannerExecuteStreamingSqlServer).Send() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3922 +0x54 cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).executeStreamingSQL() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:906 +0x13a1 cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).ExecuteStreamingSql() 
/tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:849 +0xa4 cloud.google.com/go/spanner/apiv1/spannerpb._Spanner_ExecuteStreamingSql_Handler() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3909 +0xf8 google.golang.org/grpc.(*Server).processStreamingRPC() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1654 +0x2087 google.golang.org/grpc.(*Server).handleStream() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1741 +0xfae google.golang.org/grpc.(*Server).serveStreams.func1.1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0xec Goroutine 21940 (running) created at: google.golang.org/grpc.(*Server).serveStreams.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x2b0 google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:631 +0x4ea9 google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:673 +0x264 google.golang.org/grpc.(*Server).serveStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x261 google.golang.org/grpc.(*Server).handleRawConn.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x64 Goroutine 21923 (finished) created at: google.golang.org/grpc.(*Server).serveStreams.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x2b0 google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:631 +0x4ea9 google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:673 +0x264 google.golang.org/grpc.(*Server).serveStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x261 google.golang.org/grpc.(*Server).handleRawConn.func1() 
/go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x64 ================== testing.go:1446: race detected during execution of test</pre></details>
1.0
spanner: TestClient_ReadWriteTransaction_Tag failed - Note: #7762 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: 255bdf21a1f781ad79ce746cf0c320a6b90cd61f buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2da27176-6669-4007-a3ee-7b837244d04e), [Sponge](http://sponge2/2da27176-6669-4007-a3ee-7b837244d04e) status: failed <details><summary>Test output</summary><br><pre>================== WARNING: DATA RACE Write at 0x00c000c0a670 by goroutine 21940: cloud.google.com/go/spanner/internal/testutil.StatementResult.getResultSetWithTransactionSet() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:222 +0x74b cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).executeStreamingSQL() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:878 +0x53a cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).ExecuteStreamingSql() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:849 +0xa4 cloud.google.com/go/spanner/apiv1/spannerpb._Spanner_ExecuteStreamingSql_Handler() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3909 +0xf8 google.golang.org/grpc.(*Server).processStreamingRPC() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1654 +0x2087 google.golang.org/grpc.(*Server).handleStream() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1741 +0xfae google.golang.org/grpc.(*Server).serveStreams.func1.1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0xec Previous read at 0x00c000c0a670 by goroutine 21923: google.golang.org/protobuf/internal/impl.pointer.Elem() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/pointer_unsafe.go:119 +0x411 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshalAppendPointer() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:136 +0x3bf 
google.golang.org/protobuf/internal/impl.appendMessageInfo() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/codec_field.go:238 +0x199 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshalAppendPointer() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:139 +0x499 google.golang.org/protobuf/internal/impl.(*MessageInfo).marshal() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/internal/impl/encode.go:107 +0xda google.golang.org/protobuf/internal/impl.(*MessageInfo).marshal-fm() <autogenerated>:1 +0xd4 google.golang.org/protobuf/proto.MarshalOptions.marshal() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/proto/encode.go:166 +0x3c2 google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend() /go/pkg/mod/google.golang.org/protobuf@v1.31.0/proto/encode.go:125 +0xa5 github.com/golang/protobuf/proto.marshalAppend() /go/pkg/mod/github.com/golang/protobuf@v1.5.3/proto/wire.go:40 +0xe4 github.com/golang/protobuf/proto.Marshal() /go/pkg/mod/github.com/golang/protobuf@v1.5.3/proto/wire.go:23 +0x67 google.golang.org/grpc/encoding/proto.codec.Marshal() /go/pkg/mod/google.golang.org/grpc@v1.57.0/encoding/proto/proto.go:45 +0x68 google.golang.org/grpc/encoding/proto.(*codec).Marshal() <autogenerated>:1 +0x5d google.golang.org/grpc.encode() /go/pkg/mod/google.golang.org/grpc@v1.57.0/rpc_util.go:633 +0x6a google.golang.org/grpc.prepareMsg() /go/pkg/mod/google.golang.org/grpc@v1.57.0/stream.go:1766 +0x1b6 google.golang.org/grpc.(*serverStream).SendMsg() /go/pkg/mod/google.golang.org/grpc@v1.57.0/stream.go:1642 +0x34d cloud.google.com/go/spanner/apiv1/spannerpb.(*spannerExecuteStreamingSqlServer).Send() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3922 +0x54 cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).executeStreamingSQL() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:906 +0x13a1 
cloud.google.com/go/spanner/internal/testutil.(*inMemSpannerServer).ExecuteStreamingSql() /tmpfs/src/google-cloud-go/spanner/internal/testutil/inmem_spanner_server.go:849 +0xa4 cloud.google.com/go/spanner/apiv1/spannerpb._Spanner_ExecuteStreamingSql_Handler() /tmpfs/src/google-cloud-go/spanner/apiv1/spannerpb/spanner.pb.go:3909 +0xf8 google.golang.org/grpc.(*Server).processStreamingRPC() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1654 +0x2087 google.golang.org/grpc.(*Server).handleStream() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1741 +0xfae google.golang.org/grpc.(*Server).serveStreams.func1.1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0xec Goroutine 21940 (running) created at: google.golang.org/grpc.(*Server).serveStreams.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x2b0 google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:631 +0x4ea9 google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:673 +0x264 google.golang.org/grpc.(*Server).serveStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x261 google.golang.org/grpc.(*Server).handleRawConn.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x64 Goroutine 21923 (finished) created at: google.golang.org/grpc.(*Server).serveStreams.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x2b0 google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:631 +0x4ea9 google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:673 +0x264 google.golang.org/grpc.(*Server).serveStreams() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x261 
google.golang.org/grpc.(*Server).handleRawConn.func1() /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x64 ================== testing.go:1446: race detected during execution of test</pre></details>
non_process
spanner testclient readwritetransaction tag failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output warning data race write at by goroutine cloud google com go spanner internal testutil statementresult getresultsetwithtransactionset tmpfs src google cloud go spanner internal testutil inmem spanner server go cloud google com go spanner internal testutil inmemspannerserver executestreamingsql tmpfs src google cloud go spanner internal testutil inmem spanner server go cloud google com go spanner internal testutil inmemspannerserver executestreamingsql tmpfs src google cloud go spanner internal testutil inmem spanner server go cloud google com go spanner spannerpb spanner executestreamingsql handler tmpfs src google cloud go spanner spannerpb spanner pb go google golang org grpc server processstreamingrpc go pkg mod google golang org grpc server go google golang org grpc server handlestream go pkg mod google golang org grpc server go google golang org grpc server servestreams go pkg mod google golang org grpc server go previous read at by goroutine google golang org protobuf internal impl pointer elem go pkg mod google golang org protobuf internal impl pointer unsafe go google golang org protobuf internal impl messageinfo marshalappendpointer go pkg mod google golang org protobuf internal impl encode go google golang org protobuf internal impl appendmessageinfo go pkg mod google golang org protobuf internal impl codec field go google golang org protobuf internal impl messageinfo marshalappendpointer go pkg mod google golang org protobuf internal impl encode go google golang org protobuf internal impl messageinfo marshal go pkg mod google golang org protobuf internal impl encode go google golang org protobuf internal impl messageinfo marshal fm google golang org protobuf proto marshaloptions marshal go pkg mod google golang org protobuf proto encode go google golang org protobuf proto 
marshaloptions marshalappend go pkg mod google golang org protobuf proto encode go github com golang protobuf proto marshalappend go pkg mod github com golang protobuf proto wire go github com golang protobuf proto marshal go pkg mod github com golang protobuf proto wire go google golang org grpc encoding proto codec marshal go pkg mod google golang org grpc encoding proto proto go google golang org grpc encoding proto codec marshal google golang org grpc encode go pkg mod google golang org grpc rpc util go google golang org grpc preparemsg go pkg mod google golang org grpc stream go google golang org grpc serverstream sendmsg go pkg mod google golang org grpc stream go cloud google com go spanner spannerpb spannerexecutestreamingsqlserver send tmpfs src google cloud go spanner spannerpb spanner pb go cloud google com go spanner internal testutil inmemspannerserver executestreamingsql tmpfs src google cloud go spanner internal testutil inmem spanner server go cloud google com go spanner internal testutil inmemspannerserver executestreamingsql tmpfs src google cloud go spanner internal testutil inmem spanner server go cloud google com go spanner spannerpb spanner executestreamingsql handler tmpfs src google cloud go spanner spannerpb spanner pb go google golang org grpc server processstreamingrpc go pkg mod google golang org grpc server go google golang org grpc server handlestream go pkg mod google golang org grpc server go google golang org grpc server servestreams go pkg mod google golang org grpc server go goroutine running created at google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc internal transport operateheaders go pkg mod google golang org grpc internal transport server go google golang org grpc internal transport handlestreams go pkg mod google golang org grpc internal transport server go google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc 
server handlerawconn go pkg mod google golang org grpc server go goroutine finished created at google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc internal transport operateheaders go pkg mod google golang org grpc internal transport server go google golang org grpc internal transport handlestreams go pkg mod google golang org grpc internal transport server go google golang org grpc server servestreams go pkg mod google golang org grpc server go google golang org grpc server handlerawconn go pkg mod google golang org grpc server go testing go race detected during execution of test
0
15,415
19,602,665,264
IssuesEvent
2022-01-06 04:23:49
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
better triaging of terminal launch failures
feature-request terminal-process
I see at least a few terminal launch failure issues each week. I propose when a terminal process exits during launch (likely due to bad shell/shell args), we show a notification with a link to docs that might help. Even if it just points to docs that say: "in your issue, include your settings.json file", it would save time and point users in the right direction.
1.0
better triaging of terminal launch failures - I see at least a few terminal launch failure issues each week. I propose when a terminal process exits during launch (likely due to bad shell/shell args), we show a notification with a link to docs that might help. Even if it just points to docs that say: "in your issue, include your settings.json file", it would save time and point users in the right direction.
process
better triaging of terminal launch failures i see at least a few terminal launch failure issues each week i propose when a terminal process exits during launch likely due to bad shell shell args we show a notification with a link to docs that might help even if it just points to docs that say in your issue include your settings json file it would save time and point users in the right direction
1
13,273
15,757,018,121
IssuesEvent
2021-03-31 04:33:46
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
How to support grpclb protocol in the remote execution scenarios.
type: support / not a bug (process)
Hello. We now use remote cache and remote execution in our building scenario, and set --remote_cache build parameter to our remote cache server DNS. But if we want to use load balance,we should setup LVS or nginx as Layer4/7 proxy and set --remote_cache to the proxy. That is not our ideal deployment model. We want to use a lookaside load balancer which is the grpclb API specification of grpc,and set --remote_cache to the lookaside load balancer. Does Bazel support grpclb(look aside load balancing) now? and how to use it without change any Bazel source code. Thanks a lot!
1.0
How to support grpclb protocol in the remote execution scenarios. - Hello. We now use remote cache and remote execution in our building scenario, and set --remote_cache build parameter to our remote cache server DNS. But if we want to use load balance,we should setup LVS or nginx as Layer4/7 proxy and set --remote_cache to the proxy. That is not our ideal deployment model. We want to use a lookaside load balancer which is the grpclb API specification of grpc,and set --remote_cache to the lookaside load balancer. Does Bazel support grpclb(look aside load balancing) now? and how to use it without change any Bazel source code. Thanks a lot!
process
how to support grpclb protocol in the remote execution scenarios hello we now use remote cache and remote execution in our building scenario and set remote cache build parameter to our remote cache server dns but if we want to use load balance we should setup lvs or nginx as proxy and set remote cache to the proxy that is not our ideal deployment model we want to use a lookaside load balancer which is the grpclb api specification of grpc and set remote cache to the lookaside load balancer does bazel support grpclb look aside load balancing now and how to use it without change any bazel source code thanks a lot
1
274,772
30,173,601,633
IssuesEvent
2023-07-04 01:04:11
turkdevops/snyk
https://api.github.com/repos/turkdevops/snyk
reopened
CVE-2016-1902 (High) detected in symfony/symfony-v2.3.1
Mend: dependency security vulnerability
## CVE-2016-1902 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfony/symfony-v2.3.1</b></p></summary> <p>The Symfony PHP framework</p> <p>Library home page: <a href="https://api.github.com/repos/symfony/symfony/zipball/0902c606b4df1161f5b786ae89f37b71380b1f23">https://api.github.com/repos/symfony/symfony/zipball/0902c606b4df1161f5b786ae89f37b71380b1f23</a></p> <p> Dependency Hierarchy: - :x: **symfony/symfony-v2.3.1** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/snyk/commit/9505f4ca92405cc9273dc3726c2d274ce28a4407">9505f4ca92405cc9273dc3726c2d274ce28a4407</a></p> <p>Found in base branch: <b>ALL_HANDS/major-secrets</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The nextBytes function in the SecureRandom class in Symfony before 2.3.37, 2.6.x before 2.6.13, and 2.7.x before 2.7.9 does not properly generate random numbers when used with PHP 5.x without the paragonie/random_compat library and the openssl_random_pseudo_bytes function fails, which makes it easier for attackers to defeat cryptographic protection mechanisms via unspecified vectors. 
<p>Publish Date: 2016-06-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-1902>CVE-2016-1902</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-1902">https://nvd.nist.gov/vuln/detail/CVE-2016-1902</a></p> <p>Release Date: 2016-06-01</p> <p>Fix Resolution: 2.3.37,2.6.13,2.7.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-1902 (High) detected in symfony/symfony-v2.3.1 - ## CVE-2016-1902 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>symfony/symfony-v2.3.1</b></p></summary> <p>The Symfony PHP framework</p> <p>Library home page: <a href="https://api.github.com/repos/symfony/symfony/zipball/0902c606b4df1161f5b786ae89f37b71380b1f23">https://api.github.com/repos/symfony/symfony/zipball/0902c606b4df1161f5b786ae89f37b71380b1f23</a></p> <p> Dependency Hierarchy: - :x: **symfony/symfony-v2.3.1** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/snyk/commit/9505f4ca92405cc9273dc3726c2d274ce28a4407">9505f4ca92405cc9273dc3726c2d274ce28a4407</a></p> <p>Found in base branch: <b>ALL_HANDS/major-secrets</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The nextBytes function in the SecureRandom class in Symfony before 2.3.37, 2.6.x before 2.6.13, and 2.7.x before 2.7.9 does not properly generate random numbers when used with PHP 5.x without the paragonie/random_compat library and the openssl_random_pseudo_bytes function fails, which makes it easier for attackers to defeat cryptographic protection mechanisms via unspecified vectors. 
<p>Publish Date: 2016-06-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-1902>CVE-2016-1902</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-1902">https://nvd.nist.gov/vuln/detail/CVE-2016-1902</a></p> <p>Release Date: 2016-06-01</p> <p>Fix Resolution: 2.3.37,2.6.13,2.7.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in symfony symfony cve high severity vulnerability vulnerable library symfony symfony the symfony php framework library home page a href dependency hierarchy x symfony symfony vulnerable library found in head commit a href found in base branch all hands major secrets vulnerability details the nextbytes function in the securerandom class in symfony before x before and x before does not properly generate random numbers when used with php x without the paragonie random compat library and the openssl random pseudo bytes function fails which makes it easier for attackers to defeat cryptographic protection mechanisms via unspecified vectors publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
4,588
7,431,219,646
IssuesEvent
2018-03-25 12:38:10
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Investigate flaky test-stdout-buffer-flush-on-exit
CI / flaky test arm process test
* **Version**: master * **Platform**: arm/linux * **Subsystem**: test https://ci.nodejs.org/job/node-test-commit-arm/10285/nodes=ubuntu1604-arm64/console ``` not ok 1448 known_issues/test-stdout-buffer-flush-on-exit --- duration_ms: 11.38 severity: fail stack: |- ```
1.0
Investigate flaky test-stdout-buffer-flush-on-exit - * **Version**: master * **Platform**: arm/linux * **Subsystem**: test https://ci.nodejs.org/job/node-test-commit-arm/10285/nodes=ubuntu1604-arm64/console ``` not ok 1448 known_issues/test-stdout-buffer-flush-on-exit --- duration_ms: 11.38 severity: fail stack: |- ```
process
investigate flaky test stdout buffer flush on exit version master platform arm linux subsystem test not ok known issues test stdout buffer flush on exit duration ms severity fail stack
1
18,781
24,684,378,200
IssuesEvent
2022-10-19 01:32:06
unicode-org/icu4x
https://api.github.com/repos/unicode-org/icu4x
closed
Fix up npm installation and caching on CI
T-docs-tests C-process S-small
We should never run `npm install` on CI, and in general we should avoid hitting npmjs.org in CI. We should leverage caching. We do this in general, but it doesn't seem to always work.
1.0
Fix up npm installation and caching on CI - We should never run `npm install` on CI, and in general we should avoid hitting npmjs.org in CI. We should leverage caching. We do this in general, but it doesn't seem to always work.
process
fix up npm installation and caching on ci we should never run npm install on ci and in general we should avoid hitting npmjs org in ci we should leverage caching we do this in general but it doesn t seem to always work
1
115,252
17,293,850,259
IssuesEvent
2021-07-25 10:19:21
atlslscsrv-app/B82B787C-E596
https://api.github.com/repos/atlslscsrv-app/B82B787C-E596
opened
WS-2017-0247 (Low) detected in ms-0.7.2.tgz
security vulnerability
## WS-2017-0247 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.7.2.tgz</b></p></summary> <p>Tiny milisecond conversion utility</p> <p>Library home page: <a href="https://registry.npmjs.org/ms/-/ms-0.7.2.tgz">https://registry.npmjs.org/ms/-/ms-0.7.2.tgz</a></p> <p>Path to dependency file: B82B787C-E596/dist/resources/app/package.json</p> <p>Path to vulnerable library: B82B787C-E596/dist/resources/app/node_modules/pouchdb-browser/node_modules/ms/package.json</p> <p> Dependency Hierarchy: - pouchdb-browser-6.2.0.tgz (Root Library) - debug-2.6.1.tgz - :x: **ms-0.7.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/atlslscsrv-app/B82B787C-E596/commits/4e3e58a84e5a5ffb4bf1b6e88a12d06be953e450">4e3e58a84e5a5ffb4bf1b6e88a12d06be953e450</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). 
<p>Publish Date: 2017-04-12 <p>URL: <a href=https://github.com/zeit/ms/commit/305f2ddcd4eff7cc7c518aca6bb2b2d2daad8fef>WS-2017-0247</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>3.4</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/vercel/ms/pull/89">https://github.com/vercel/ms/pull/89</a></p> <p>Release Date: 2017-04-12</p> <p>Fix Resolution: 2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2017-0247 (Low) detected in ms-0.7.2.tgz - ## WS-2017-0247 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.7.2.tgz</b></p></summary> <p>Tiny milisecond conversion utility</p> <p>Library home page: <a href="https://registry.npmjs.org/ms/-/ms-0.7.2.tgz">https://registry.npmjs.org/ms/-/ms-0.7.2.tgz</a></p> <p>Path to dependency file: B82B787C-E596/dist/resources/app/package.json</p> <p>Path to vulnerable library: B82B787C-E596/dist/resources/app/node_modules/pouchdb-browser/node_modules/ms/package.json</p> <p> Dependency Hierarchy: - pouchdb-browser-6.2.0.tgz (Root Library) - debug-2.6.1.tgz - :x: **ms-0.7.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/atlslscsrv-app/B82B787C-E596/commits/4e3e58a84e5a5ffb4bf1b6e88a12d06be953e450">4e3e58a84e5a5ffb4bf1b6e88a12d06be953e450</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). 
<p>Publish Date: 2017-04-12 <p>URL: <a href=https://github.com/zeit/ms/commit/305f2ddcd4eff7cc7c518aca6bb2b2d2daad8fef>WS-2017-0247</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>3.4</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/vercel/ms/pull/89">https://github.com/vercel/ms/pull/89</a></p> <p>Release Date: 2017-04-12</p> <p>Fix Resolution: 2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws low detected in ms tgz ws low severity vulnerability vulnerable library ms tgz tiny milisecond conversion utility library home page a href path to dependency file dist resources app package json path to vulnerable library dist resources app node modules pouchdb browser node modules ms package json dependency hierarchy pouchdb browser tgz root library debug tgz x ms tgz vulnerable library found in head commit a href found in base branch master vulnerability details affected versions of this package are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
354,769
25,174,879,375
IssuesEvent
2022-11-11 08:19:59
cliftonfelix/pe
https://api.github.com/repos/cliftonfelix/pe
opened
javascript launch warning
type.DocumentationBug severity.Low
![image.png](https://raw.githubusercontent.com/cliftonfelix/pe/main/files/023a01f4-dfdd-4791-b0f3-d7936ef1cb7e.png) When I hover over and click the part "Links in blue" and "Words with a dotted underline", I got a warning ![image.png](https://raw.githubusercontent.com/cliftonfelix/pe/main/files/ff70b7ba-06ef-4d80-8a61-35dc2572573d.png) When I click yes, nothing happens <!--session: 1668145524208-25d2f708-da60-44a5-9b69-d29fc1345303--> <!--Version: Web v3.4.4-->
1.0
javascript launch warning - ![image.png](https://raw.githubusercontent.com/cliftonfelix/pe/main/files/023a01f4-dfdd-4791-b0f3-d7936ef1cb7e.png) When I hover over and click the part "Links in blue" and "Words with a dotted underline", I got a warning ![image.png](https://raw.githubusercontent.com/cliftonfelix/pe/main/files/ff70b7ba-06ef-4d80-8a61-35dc2572573d.png) When I click yes, nothing happens <!--session: 1668145524208-25d2f708-da60-44a5-9b69-d29fc1345303--> <!--Version: Web v3.4.4-->
non_process
javascript launch warning when i hover over and click the part links in blue and words with a dotted underline i got a warning when i click yes nothing happens
0
101,702
8,795,568,839
IssuesEvent
2018-12-22 17:20:08
edenlabllc/ehealth.api
https://api.github.com/repos/edenlabllc/ehealth.api
closed
Fill "reason" on sign declaration request
BE [zube]: In Test epic/declaration status/in-progress
On [sign declaration request](https://uaehealthapi.docs.apiary.io/#reference/public.-medical-service-provider-integration-layer/declaration-requests/sign-declaration-request) while inserting new declaration to ops, fill in ops.declarations.reason [conf](https://edenlab.atlassian.net/wiki/x/Y_oQ) - if declaration_request.person.authentication_methods.type = OFFLINE - set "reason" = offline - if declaration_request.person.no_tax_id=true - set "reason" - no_tax_id - if declaration_request.person.authentication_methods.type = OFFLINE AND declaration_request.person.no_tax_id=true - set "reason" - no_tax_id - else - do not set "reason"
1.0
Fill "reason" on sign declaration request - On [sign declaration request](https://uaehealthapi.docs.apiary.io/#reference/public.-medical-service-provider-integration-layer/declaration-requests/sign-declaration-request) while inserting new declaration to ops, fill in ops.declarations.reason [conf](https://edenlab.atlassian.net/wiki/x/Y_oQ) - if declaration_request.person.authentication_methods.type = OFFLINE - set "reason" = offline - if declaration_request.person.no_tax_id=true - set "reason" - no_tax_id - if declaration_request.person.authentication_methods.type = OFFLINE AND declaration_request.person.no_tax_id=true - set "reason" - no_tax_id - else - do not set "reason"
non_process
fill reason on sign declaration request on while inserting new declaration to ops fill in ops declarations reason if declaration request person authentication methods type offline set reason offline if declaration request person no tax id true set reason no tax id if declaration request person authentication methods type offline and declaration request person no tax id true set reason no tax id else do not set reason
0
148,965
23,407,872,398
IssuesEvent
2022-08-12 14:29:59
zuri-training/Col-films-Team-120
https://api.github.com/repos/zuri-training/Col-films-Team-120
closed
Notifications Page
design figma
Create the lo-fi and hi-fi designs for the website notifications page using the defined style guide
1.0
Notifications Page - Create the lo-fi and hi-fi designs for the website notifications page using the defined style guide
non_process
notifications page create the lo fi and hi fi designs for the website notifications page using the defined style guide
0
18,048
24,057,772,585
IssuesEvent
2022-09-16 18:38:21
openxla/stablehlo
https://api.github.com/repos/openxla/stablehlo
closed
Document compatibility guarantees
Process
As discussed in https://github.com/openxla/stablehlo/pull/1, one of the main reasons for introducing StableHLO is being able to provide compatibility guarantees for MHLO while keeping MHLO as flexible as possible. This is something that we can start doing right now, and within this ticket I'll work on putting together a proposal for what this means exactly.
1.0
Document compatibility guarantees - As discussed in https://github.com/openxla/stablehlo/pull/1, one of the main reasons for introducing StableHLO is being able to provide compatibility guarantees for MHLO while keeping MHLO as flexible as possible. This is something that we can start doing right now, and within this ticket I'll work on putting together a proposal for what this means exactly.
process
document compatibility guarantees as discussed in one of the main reasons for introducing stablehlo is being able to provide compatibility guarantees for mhlo while keeping mhlo as flexible as possible this is something that we can start doing right now and within this ticket i ll work on putting together a proposal for what this means exactly
1
1,972
4,797,105,586
IssuesEvent
2016-11-01 10:45:36
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Systematize tests with url processing with the flag "i" (iframe)
AREA: client SYSTEM: URL processing
Cases of using the `i` flag: - Assign src to iframe - Assign a url attribute to elements (`a`, `form`, `area`, `base`) with the `target` attribute (`_blank`, `_self`, `_parent`, `_top` or framename) - Assign the `target` attribute to elements with a url attribute - Change iframe name - some `target` attributes cease to point to this iframe - some `target` attributes start to point to this iframe - Change the `target` attribute in the `base` tag - Change a url via `location` (`href`, `path`, `perlace()`, `assign()`) - from top window into iframe - from iframe into iframe - from top window into cross-domain iframe - from iframe into cross-domain iframe
1.0
Systematize tests with url processing with the flag "i" (iframe) - Cases of using the `i` flag: - Assign src to iframe - Assign a url attribute to elements (`a`, `form`, `area`, `base`) with the `target` attribute (`_blank`, `_self`, `_parent`, `_top` or framename) - Assign the `target` attribute to elements with a url attribute - Change iframe name - some `target` attributes cease to point to this iframe - some `target` attributes start to point to this iframe - Change the `target` attribute in the `base` tag - Change a url via `location` (`href`, `path`, `perlace()`, `assign()`) - from top window into iframe - from iframe into iframe - from top window into cross-domain iframe - from iframe into cross-domain iframe
process
systematize tests with url processing with the flag i iframe cases of using the i flag assign src to iframe assign a url attribute to elements a form area base with the target attribute blank self parent top or framename assign the target attribute to elements with a url attribute change iframe name some target attributes cease to point to this iframe some target attributes start to point to this iframe change the target attribute in the base tag change a url via location href path perlace assign from top window into iframe from iframe into iframe from top window into cross domain iframe from iframe into cross domain iframe
1
61,001
6,720,705,724
IssuesEvent
2017-10-16 08:53:03
consul/consul
https://api.github.com/repos/consul/consul
closed
Right answer resize should reorder
polls production testable
# Current Behaviour Right now when you click expand the gallery of the second answer on a Poll you'll get it expanded but below the first answer: ![zoom_right_answer](https://user-images.githubusercontent.com/983242/31355082-2601b8d2-ad39-11e7-800a-8c337b0a27d5.gif) # Desired Behaviour Instead of expanding below the first answer, it should be on top of it. With the same effect as expanding the first answer gallery does.
1.0
Right answer resize should reorder - # Current Behaviour Right now when you click expand the gallery of the second answer on a Poll you'll get it expanded but below the first answer: ![zoom_right_answer](https://user-images.githubusercontent.com/983242/31355082-2601b8d2-ad39-11e7-800a-8c337b0a27d5.gif) # Desired Behaviour Instead of expanding below the first answer, it should be on top of it. With the same effect as expanding the first answer gallery does.
non_process
right answer resize should reorder current behaviour right now when you click expand the gallery of the second answer on a poll you ll get it expanded but below the first answer desired behaviour instead of expanding below the first answer it should be on top of it with the same effect as expanding the first answer gallery does
0
93,445
8,416,469,968
IssuesEvent
2018-10-14 02:42:45
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
chrome:// urls should display as brave://
QA/Test-Plan-Specified QA/Yes feature/url-bar feature/user-interface
It may be too much work to redirect all chrome:// urls to brave:// and have the appropriate handlers and checks still work reliable. Luckily, the ToolbarModel provides GetURLForDisplay which can be overridden so that brave:// displays most of the time (until the URL is edited). When paired with https://github.com/brave/brave-browser/issues/810 this should cover quite a bit of the requirement to support brave:// URLs without confusing the users about the chromium-based core. Test plan: ## Displays brave:// url - Clean Profile - Launch browser - Welcome page should be open with `brave://welcome` as the address in LocationBar ## Edits chrome:// url - Open settings - Settings page should be open with `brave://` prefix in LocationBar - Edit the url by focusing in the LocationBar and moving the cursor or adding text - LocationBar text should have chrome:// prefix ## Copies chrome:// url - Open settings - Settings page should be open with `brave://` prefix in LocationBar - Right-click the LocationBar text and select 'Copy' - Clipboard text should have chrome:// prefix
1.0
chrome:// urls should display as brave:// - It may be too much work to redirect all chrome:// urls to brave:// and have the appropriate handlers and checks still work reliable. Luckily, the ToolbarModel provides GetURLForDisplay which can be overridden so that brave:// displays most of the time (until the URL is edited). When paired with https://github.com/brave/brave-browser/issues/810 this should cover quite a bit of the requirement to support brave:// URLs without confusing the users about the chromium-based core. Test plan: ## Displays brave:// url - Clean Profile - Launch browser - Welcome page should be open with `brave://welcome` as the address in LocationBar ## Edits chrome:// url - Open settings - Settings page should be open with `brave://` prefix in LocationBar - Edit the url by focusing in the LocationBar and moving the cursor or adding text - LocationBar text should have chrome:// prefix ## Copies chrome:// url - Open settings - Settings page should be open with `brave://` prefix in LocationBar - Right-click the LocationBar text and select 'Copy' - Clipboard text should have chrome:// prefix
non_process
chrome urls should display as brave it may be too much work to redirect all chrome urls to brave and have the appropriate handlers and checks still work reliable luckily the toolbarmodel provides geturlfordisplay which can be overridden so that brave displays most of the time until the url is edited when paired with this should cover quite a bit of the requirement to support brave urls without confusing the users about the chromium based core test plan displays brave url clean profile launch browser welcome page should be open with brave welcome as the address in locationbar edits chrome url open settings settings page should be open with brave prefix in locationbar edit the url by focusing in the locationbar and moving the cursor or adding text locationbar text should have chrome prefix copies chrome url open settings settings page should be open with brave prefix in locationbar right click the locationbar text and select copy clipboard text should have chrome prefix
0
439,702
12,685,480,976
IssuesEvent
2020-06-20 04:44:01
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
opened
Show validation errors in the initial state
3.2.0 Priority/Normal Type/Bug Type/React-UI
### Description: ![categories](https://user-images.githubusercontent.com/3313885/85191972-6f6b2c80-b2de-11ea-9461-943e7b74d3d4.gif) ### Steps to reproduce: ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
Show validation errors in the initial state - ### Description: ![categories](https://user-images.githubusercontent.com/3313885/85191972-6f6b2c80-b2de-11ea-9461-943e7b74d3d4.gif) ### Steps to reproduce: ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
non_process
show validation errors in the initial state description steps to reproduce affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
0
5,584
8,442,054,263
IssuesEvent
2018-10-18 12:11:22
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
<Modal />: controllable content overflow
bug processing
**A component designed to overflow Modal's area is hidden** <img width="376" alt="screen shot 2018-10-12 at 16 45 44" src="https://user-images.githubusercontent.com/3975660/46877195-93efdf80-ce40-11e8-8c22-9186dcbdde86.png"> **There should be available some property to turn on/off overflow on Modal** And our component could overflow the modal <img width="441" alt="screen shot 2018-10-12 at 16 48 30" src="https://user-images.githubusercontent.com/3975660/46877308-d5808a80-ce40-11e8-86c6-4bd325a03754.png">
1.0
<Modal />: controllable content overflow - **A component designed to overflow Modal's area is hidden** <img width="376" alt="screen shot 2018-10-12 at 16 45 44" src="https://user-images.githubusercontent.com/3975660/46877195-93efdf80-ce40-11e8-8c22-9186dcbdde86.png"> **There should be available some property to turn on/off overflow on Modal** And our component could overflow the modal <img width="441" alt="screen shot 2018-10-12 at 16 48 30" src="https://user-images.githubusercontent.com/3975660/46877308-d5808a80-ce40-11e8-86c6-4bd325a03754.png">
process
controllable content overflow a component designed to overflow modal s area is hidden img width alt screen shot at src there should be available some property to turn on off overflow on modal and our component could overflow the modal img width alt screen shot at src
1
175,825
21,334,306,824
IssuesEvent
2022-04-18 12:47:29
Gal-Doron/spring-bot
https://api.github.com/repos/Gal-Doron/spring-bot
opened
CVE-2021-44550 (High) detected in stanford-corenlp-3.9.2.jar, stanford-corenlp-3.9.2.jar
security vulnerability
## CVE-2021-44550 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>stanford-corenlp-3.9.2.jar</b>, <b>stanford-corenlp-3.9.2.jar</b></p></summary> <p> <details><summary><b>stanford-corenlp-3.9.2.jar</b></p></summary> <p>Stanford CoreNLP provides a set of natural language analysis tools which can take raw English language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and word dependencies, and indicate which noun phrases refer to the same entities. It provides the foundational building blocks for higher level text understanding applications.</p> <p>Library home page: <a href="https://nlp.stanford.edu/software/corenlp.html">https://nlp.stanford.edu/software/corenlp.html</a></p> <p>Path to dependency file: /tools/reminder-bot/pom.xml</p> <p>Path to vulnerable library: /repository/edu/stanford/nlp/stanford-corenlp/3.9.2/stanford-corenlp-3.9.2.jar</p> <p> Dependency Hierarchy: - :x: **stanford-corenlp-3.9.2.jar** (Vulnerable Library) </details> <details><summary><b>stanford-corenlp-3.9.2.jar</b></p></summary> <p>Stanford CoreNLP provides a set of natural language analysis tools which can take raw English language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and word dependencies, and indicate which noun phrases refer to the same entities. 
It provides the foundational building blocks for higher level text understanding applications.</p> <p>Library home page: <a href="https://nlp.stanford.edu/software/corenlp.html">https://nlp.stanford.edu/software/corenlp.html</a></p> <p>Path to dependency file: /tools/reminder-bot/pom.xml</p> <p>Path to vulnerable library: /repository/edu/stanford/nlp/stanford-corenlp/3.9.2/stanford-corenlp-3.9.2-models.jar</p> <p> Dependency Hierarchy: - :x: **stanford-corenlp-3.9.2.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/spring-bot/commit/6866b73f281ca08d15854ae18b17744dbad89a81">6866b73f281ca08d15854ae18b17744dbad89a81</a></p> <p>Found in base branch: <b>spring-bot-master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An Incorrect Access Control vulnerability exists in CoreNLP 4.3.2 via the classifier in NERServlet.java (lines 158 and 159). <p>Publish Date: 2022-02-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44550>CVE-2021-44550</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/stanfordnlp/CoreNLP/issues/1222">https://github.com/stanfordnlp/CoreNLP/issues/1222</a></p> <p>Release Date: 2022-02-24</p> <p>Fix Resolution: 4.4.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"edu.stanford.nlp","packageName":"stanford-corenlp","packageVersion":"3.9.2","packageFilePaths":["/tools/reminder-bot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"edu.stanford.nlp:stanford-corenlp:3.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.0","isBinary":false},{"packageType":"Java","groupId":"edu.stanford.nlp","packageName":"stanford-corenlp","packageVersion":"3.9.2","packageFilePaths":["/tools/reminder-bot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"edu.stanford.nlp:stanford-corenlp:3.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.0","isBinary":false}],"baseBranches":["spring-bot-master"],"vulnerabilityIdentifier":"CVE-2021-44550","vulnerabilityDetails":"An Incorrect Access Control vulnerability exists in CoreNLP 4.3.2 via the classifier in NERServlet.java (lines 158 and 159).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44550","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-44550 (High) detected in stanford-corenlp-3.9.2.jar, stanford-corenlp-3.9.2.jar - ## CVE-2021-44550 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>stanford-corenlp-3.9.2.jar</b>, <b>stanford-corenlp-3.9.2.jar</b></p></summary> <p> <details><summary><b>stanford-corenlp-3.9.2.jar</b></p></summary> <p>Stanford CoreNLP provides a set of natural language analysis tools which can take raw English language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and word dependencies, and indicate which noun phrases refer to the same entities. It provides the foundational building blocks for higher level text understanding applications.</p> <p>Library home page: <a href="https://nlp.stanford.edu/software/corenlp.html">https://nlp.stanford.edu/software/corenlp.html</a></p> <p>Path to dependency file: /tools/reminder-bot/pom.xml</p> <p>Path to vulnerable library: /repository/edu/stanford/nlp/stanford-corenlp/3.9.2/stanford-corenlp-3.9.2.jar</p> <p> Dependency Hierarchy: - :x: **stanford-corenlp-3.9.2.jar** (Vulnerable Library) </details> <details><summary><b>stanford-corenlp-3.9.2.jar</b></p></summary> <p>Stanford CoreNLP provides a set of natural language analysis tools which can take raw English language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and word dependencies, and indicate which noun phrases refer to the same entities. 
It provides the foundational building blocks for higher level text understanding applications.</p> <p>Library home page: <a href="https://nlp.stanford.edu/software/corenlp.html">https://nlp.stanford.edu/software/corenlp.html</a></p> <p>Path to dependency file: /tools/reminder-bot/pom.xml</p> <p>Path to vulnerable library: /repository/edu/stanford/nlp/stanford-corenlp/3.9.2/stanford-corenlp-3.9.2-models.jar</p> <p> Dependency Hierarchy: - :x: **stanford-corenlp-3.9.2.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/spring-bot/commit/6866b73f281ca08d15854ae18b17744dbad89a81">6866b73f281ca08d15854ae18b17744dbad89a81</a></p> <p>Found in base branch: <b>spring-bot-master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An Incorrect Access Control vulnerability exists in CoreNLP 4.3.2 via the classifier in NERServlet.java (lines 158 and 159). <p>Publish Date: 2022-02-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44550>CVE-2021-44550</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/stanfordnlp/CoreNLP/issues/1222">https://github.com/stanfordnlp/CoreNLP/issues/1222</a></p> <p>Release Date: 2022-02-24</p> <p>Fix Resolution: 4.4.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"edu.stanford.nlp","packageName":"stanford-corenlp","packageVersion":"3.9.2","packageFilePaths":["/tools/reminder-bot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"edu.stanford.nlp:stanford-corenlp:3.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.0","isBinary":false},{"packageType":"Java","groupId":"edu.stanford.nlp","packageName":"stanford-corenlp","packageVersion":"3.9.2","packageFilePaths":["/tools/reminder-bot/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"edu.stanford.nlp:stanford-corenlp:3.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.0","isBinary":false}],"baseBranches":["spring-bot-master"],"vulnerabilityIdentifier":"CVE-2021-44550","vulnerabilityDetails":"An Incorrect Access Control vulnerability exists in CoreNLP 4.3.2 via the classifier in NERServlet.java (lines 158 and 159).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44550","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in stanford corenlp jar stanford corenlp jar cve high severity vulnerability vulnerable libraries stanford corenlp jar stanford corenlp jar stanford corenlp jar stanford corenlp provides a set of natural language analysis tools which can take raw english language text input and give the base forms of words their parts of speech whether they are names of companies people etc normalize dates times and numeric quantities mark up the structure of sentences in terms of phrases and word dependencies and indicate which noun phrases refer to the same entities it provides the foundational building blocks for higher level text understanding applications library home page a href path to dependency file tools reminder bot pom xml path to vulnerable library repository edu stanford nlp stanford corenlp stanford corenlp jar dependency hierarchy x stanford corenlp jar vulnerable library stanford corenlp jar stanford corenlp provides a set of natural language analysis tools which can take raw english language text input and give the base forms of words their parts of speech whether they are names of companies people etc normalize dates times and numeric quantities mark up the structure of sentences in terms of phrases and word dependencies and indicate which noun phrases refer to the same entities it provides the foundational building blocks for higher level text understanding applications library home page a href path to dependency file tools reminder bot pom xml path to vulnerable library repository edu stanford nlp stanford corenlp stanford corenlp models jar dependency hierarchy x stanford corenlp jar vulnerable library found in head commit a href found in base branch spring bot master vulnerability details an incorrect access control vulnerability exists in corenlp via the classifier in nerservlet java lines and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required 
none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree edu stanford nlp stanford corenlp isminimumfixversionavailable true minimumfixversion isbinary false packagetype java groupid edu stanford nlp packagename stanford corenlp packageversion packagefilepaths istransitivedependency false dependencytree edu stanford nlp stanford corenlp isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails an incorrect access control vulnerability exists in corenlp via the classifier in nerservlet java lines and vulnerabilityurl
0
20,792
27,535,840,611
IssuesEvent
2023-03-07 03:20:18
open-telemetry/opentelemetry-collector
https://api.github.com/repos/open-telemetry/opentelemetry-collector
closed
Extract user's browser/device/platform information from user agent
enhancement Stale priority:p3 release:after-ga area:processor
**Is your feature request related to a problem? Please describe.** I have a use case where I need browser/device/platform information from useragent. useragent field is available as a tag inside span For ex: a user-agent value of `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36` is parsed to get below values ``` deviceType: mac browser: chrome 84 platformOS; macOS (Catalina) ``` **Describe the solution you'd like** A processor which can parse the user-agent (configurable) field and add them as additional tags will be great. Somewhat similar to attributes processor but with the flexibility to add instrumentation details on how to extract data. I am willing to work on this and contribute back. Please let me know if this makes sense and should be part of the opentelemetry-collector repo.
1.0
Extract user's browser/device/platform information from user agent - **Is your feature request related to a problem? Please describe.** I have a use case where I need browser/device/platform information from useragent. useragent field is available as a tag inside span For ex: a user-agent value of `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36` is parsed to get below values ``` deviceType: mac browser: chrome 84 platformOS; macOS (Catalina) ``` **Describe the solution you'd like** A processor which can parse the user-agent (configurable) field and add them as additional tags will be great. Somewhat similar to attributes processor but with the flexibility to add instrumentation details on how to extract data. I am willing to work on this and contribute back. Please let me know if this makes sense and should be part of the opentelemetry-collector repo.
process
extract user s browser device platform information from user agent is your feature request related to a problem please describe i have a use case where i need browser device platform information from useragent useragent field is available as a tag inside span for ex a user agent value of mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari is parsed to get below values devicetype mac browser chrome platformos macos catalina describe the solution you d like a processor which can parse the user agent configurable field and add them as additional tags will be great somewhat similar to attributes processor but with the flexibility to add instrumentation details on how to extract data i am willing to work on this and contribute back please let me know if this makes sense and should be part of the opentelemetry collector repo
1
61,334
3,144,594,792
IssuesEvent
2015-09-14 14:11:46
fusioneng/mrss-to-facebook-video-app
https://api.github.com/repos/fusioneng/mrss-to-facebook-video-app
closed
Log upload success and failure to Slack
priority:low
Three environment variables: * `SLACK_WEBHOOK_URL` * `SLACK_CHANNEL_SUCCESS` - Reporting successful uploads * `SLACK_CHANNEL_ERROR` - Reporting errored uploads, and other error statuses
1.0
Log upload success and failure to Slack - Three environment variables: * `SLACK_WEBHOOK_URL` * `SLACK_CHANNEL_SUCCESS` - Reporting successful uploads * `SLACK_CHANNEL_ERROR` - Reporting errored uploads, and other error statuses
non_process
log upload success and failure to slack three environment variables slack webhook url slack channel success reporting successful uploads slack channel error reporting errored uploads and other error statuses
0
20,067
26,557,462,383
IssuesEvent
2023-01-20 13:16:05
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Redaction of SpanEvents
enhancement priority:p2 processor/redaction
### Is your feature request related to a problem? Please describe. SpanEvents have the potential to leak sensitive data, particularly when exception stack traces are automatically attached. Currently, the redaction plugin only has the ability to act upon attribute values. ### Describe the solution you'd like I would like the ability to apply blocked value regex to span events as well. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
1.0
Redaction of SpanEvents - ### Is your feature request related to a problem? Please describe. SpanEvents have the potential to leak sensitive data, particularly when exception stack traces are automatically attached. Currently, the redaction plugin only has the ability to act upon attribute values. ### Describe the solution you'd like I would like the ability to apply blocked value regex to span events as well. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
process
redaction of spanevents is your feature request related to a problem please describe spanevents have the potential to leak sensitive data particularly when exception stack traces are automatically attached currently the redaction plugin only has the ability to act upon attribute values describe the solution you d like i would like the ability to apply blocked value regex to span events as well describe alternatives you ve considered no response additional context no response
1
19,465
25,758,809,154
IssuesEvent
2022-12-08 18:35:21
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Add standard deviation aggregation support to mongo
Database/Mongo Querying/Processor Type:New Feature .Completeness
[`$stdDevPop` and `$stdDevSamp`](https://docs.mongodb.org/manual/reference/operator/aggregation/stdDevPop/#grp._S_stdDevPop) have been added as aggregation pipeline operators as of Mongo 3.2, which came out in December 2015.
1.0
Add standard deviation aggregation support to mongo - [`$stdDevPop` and `$stdDevSamp`](https://docs.mongodb.org/manual/reference/operator/aggregation/stdDevPop/#grp._S_stdDevPop) have been added as aggregation pipeline operators as of Mongo 3.2, which came out in December 2015.
process
add standard deviation aggregation support to mongo have been added as aggregation pipeline operators as of mongo which came out in december
1
145,512
22,704,908,370
IssuesEvent
2022-07-05 13:54:13
EscolaDeSaudePublica/DesignLab
https://api.github.com/repos/EscolaDeSaudePublica/DesignLab
closed
Organizar histórias do DataLab que precisam de revisão de escopo
PROJ: Soluções Digitais Prioridade Design: Alta
## **Objetivo** **Como** designer **Quero** organizar as histórias do DataLab que precisam de revisão de escopo **Para** otimizar o processo de desenvolvimento da tarefa ## **Contexto** Após a conclusão do sequenciador da Inception de Projetos, é preciso mediar a relação entre as histórias das tarefas do GitHub com as mapeadas no processo principal ## **Escopo** - [ ] Organizar as tarefas do DataLab que necessitam de revisão de Escopo ## Observações https://www.figma.com/file/ab0B64s5aAxB79NLyhZuX0/Inception-Pr%C3%A9-Oficina-ESP-2022
1.0
Organizar histórias do DataLab que precisam de revisão de escopo - ## **Objetivo** **Como** designer **Quero** organizar as histórias do DataLab que precisam de revisão de escopo **Para** otimizar o processo de desenvolvimento da tarefa ## **Contexto** Após a conclusão do sequenciador da Inception de Projetos, é preciso mediar a relação entre as histórias das tarefas do GitHub com as mapeadas no processo principal ## **Escopo** - [ ] Organizar as tarefas do DataLab que necessitam de revisão de Escopo ## Observações https://www.figma.com/file/ab0B64s5aAxB79NLyhZuX0/Inception-Pr%C3%A9-Oficina-ESP-2022
non_process
organizar histórias do datalab que precisam de revisão de escopo objetivo como designer quero organizar as histórias do datalab que precisam de revisão de escopo para otimizar o processo de desenvolvimento da tarefa contexto após a conclusão do sequenciador da inception de projetos é preciso mediar a relação entre as histórias das tarefas do github com as mapeadas no processo principal escopo organizar as tarefas do datalab que necessitam de revisão de escopo observações
0
93
2,535,030,493
IssuesEvent
2015-01-25 16:28:11
ChelseaStats/issues
https://api.github.com/repos/ChelseaStats/issues
closed
sidcelery January 14 2015 at 12:53AM
process
<blockquote class="twitter-tweet"> <p><a href="http://u.thechels.uk/1joEUnX">@ChelseaStats</a> I saw an odd stat the other day you may like. City's result at Everton is the 1st fixture where our result compares favourably</p> &mdash; Sid Celery (@sidcelery) <a href="http://u.thechels.uk/1BYDmew">January 14, 2015</a> </blockquote> <br><br> January 14, 2015 at 12:53AM<br> via Twitter
1.0
sidcelery January 14 2015 at 12:53AM - <blockquote class="twitter-tweet"> <p><a href="http://u.thechels.uk/1joEUnX">@ChelseaStats</a> I saw an odd stat the other day you may like. City's result at Everton is the 1st fixture where our result compares favourably</p> &mdash; Sid Celery (@sidcelery) <a href="http://u.thechels.uk/1BYDmew">January 14, 2015</a> </blockquote> <br><br> January 14, 2015 at 12:53AM<br> via Twitter
process
sidcelery january at mdash sid celery sidcelery january at via twitter
1
120,317
15,727,292,756
IssuesEvent
2021-03-29 12:29:54
carbon-design-system/carbon-for-ibm-dotcom
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom
closed
[TOC] Design spec, Storybook QA, Abstract
Airtable Done design
### QA the following component in storybook, and create issues if bugs are present - [ ] ToC vertical (default) - [ ] ToC horizontal ### Reminder of clean up fixes we are looking for across the board: - grid should be applied - line length - ~spacing~ skip spacing for now since an AEM layout token change is happening this sprint - naming (should be correct capitalization) - knobs
1.0
[TOC] Design spec, Storybook QA, Abstract - ### QA the following component in storybook, and create issues if bugs are present - [ ] ToC vertical (default) - [ ] ToC horizontal ### Reminder of clean up fixes we are looking for across the board: - grid should be applied - line length - ~spacing~ skip spacing for now since an AEM layout token change is happening this sprint - naming (should be correct capitalization) - knobs
non_process
design spec storybook qa abstract qa the following component in storybook and create issues if bugs are present toc vertical default toc horizontal reminder of clean up fixes we are looking for across the board grid should be applied line length spacing skip spacing for now since an aem layout token change is happening this sprint naming should be correct capitalization knobs
0
4,011
6,948,691,958
IssuesEvent
2017-12-06 01:44:16
PHPSocialNetwork/phpfastcache
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
closed
Mongodb driver + itemDetailedDate option generates driverUnwrapCdate error
6.0 6.1 7.0 >_< Working & Scheduled [*_*] Debugging [-_-] In Process ~_~ Issue confirmed
### Configuration: PhpFastCache version: ` 6.0.7 ` PHP version: `7.09 (x86) TS ` Operating system: `Windows 10` mongoc version : `1.8.2` ext-mongodb : `1.3.4` #### Issue description: Creating the CacheManager Instance : ```php $cacheInstance = CacheManager::getInstance('Mongodb',[ 'itemDetailedDate'=>true, 'host' => '127.0.0.1', 'port' => '27017', 'username' => '', 'password' => '', 'timeout' => '1' ]); ``` Caching example : ```php $item=$cacheInstance->getItem('key1'); $item->set('data'); $item->addTag('tag'); $cacheInstance->save($item); ``` Fetches data stored for the given key : ```php $value=$cacheInstance->getItem($key)->get(); ``` Fetching works well, but 2 notices are generated by php : ``` Notice: Undefined index: c in ...\src\phpFastCache\Core\Pool\DriverBaseTrait.php on line 201 Notice: Undefined index: m in ...\src\phpFastCache\Core\Pool\DriverBaseTrait.php on line 211 ``` In one of the methods involved: **driverUnwrapCdate** `var_dump` on $wrapper parameter displays: ``` array(3) { ["d"]=> string(810) "data" ["g"]=> array(1) { [0]=> string(6) "tag" } ["e"]=> object(DateTime)#33 (3) { ["date"]=> string(26) "2017-12-04 02:48:51.000000" ["timezone_type"]=> int(3) ["timezone"]=> string(13) "Europe/Berlin"} } ``` Indeed there is no key `c` or `m` in this associative array! I activated the **itemDetailedDate** option because I just need the creationDate of each item (I do not need to set the expirationDate) did I make a mistake somewhere?
1.0
Mongodb driver + itemDetailedDate option generates driverUnwrapCdate error - ### Configuration: PhpFastCache version: ` 6.0.7 ` PHP version: `7.09 (x86) TS ` Operating system: `Windows 10` mongoc version : `1.8.2` ext-mongodb : `1.3.4` #### Issue description: Creating the CacheManager Instance : ```php $cacheInstance = CacheManager::getInstance('Mongodb',[ 'itemDetailedDate'=>true, 'host' => '127.0.0.1', 'port' => '27017', 'username' => '', 'password' => '', 'timeout' => '1' ]); ``` Caching example : ```php $item=$cacheInstance->getItem('key1'); $item->set('data'); $item->addTag('tag'); $cacheInstance->save($item); ``` Fetches data stored for the given key : ```php $value=$cacheInstance->getItem($key)->get(); ``` Fetching works well, but 2 notices are generated by php : ``` Notice: Undefined index: c in ...\src\phpFastCache\Core\Pool\DriverBaseTrait.php on line 201 Notice: Undefined index: m in ...\src\phpFastCache\Core\Pool\DriverBaseTrait.php on line 211 ``` In one of the methods involved: **driverUnwrapCdate** `var_dump` on $wrapper parameter displays: ``` array(3) { ["d"]=> string(810) "data" ["g"]=> array(1) { [0]=> string(6) "tag" } ["e"]=> object(DateTime)#33 (3) { ["date"]=> string(26) "2017-12-04 02:48:51.000000" ["timezone_type"]=> int(3) ["timezone"]=> string(13) "Europe/Berlin"} } ``` Indeed there is no key `c` or `m` in this associative array! I activated the **itemDetailedDate** option because I just need the creationDate of each item (I do not need to set the expirationDate) did I make a mistake somewhere?
process
mongodb driver itemdetaileddate option generates driverunwrapcdate error configuration phpfastcache version php version ts operating system windows mongoc version ext mongodb issue description creating the cachemanager instance php cacheinstance cachemanager getinstance mongodb itemdetaileddate true host port username password timeout caching example php item cacheinstance getitem item set data item addtag tag cacheinstance save item fetches data stored for the given key php value cacheinstance getitem key get fetching works well but notices are generated by php notice undefined index c in src phpfastcache core pool driverbasetrait php on line notice undefined index m in src phpfastcache core pool driverbasetrait php on line in one of the methods involved driverunwrapcdate var dump on wrapper parameter displays array string data array string tag object datetime string int string europe berlin indeed there is no key c or m in this associative array i activated the itemdetaileddate option because i just need the creationdate of each item i do not need to set the expirationdate did i make a mistake somewhere
1
2,641
5,415,486,746
IssuesEvent
2017-03-01 21:42:57
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
closed
[subtitles] [fr] #RDLS19 - SPIRULINE, CONSTRUIRE EN TERRE, ÉMISSION POLITIQUE, NUCLÉAIRE AMÉRICAIN
Language: French Process: [6] Approved
# Video title #RDLS19 - SPIRULINE, CONSTRUIRE EN TERRE, ÉMISSION POLITIQUE, NUCLÉAIRE AMÉRICAIN # URL https://www.youtube.com/watch?v=fyrAg_pReTY # Youtube subtitles language French # Duration 21:37 # Subtitles URL https://www.youtube.com/timedtext_editor?bl=vmp&ref=player&lang=fr&action_mde_edit_form=1&tab=captions&v=fyrAg_pReTY&ui=hd&captions-r=1
1.0
[subtitles] [fr] #RDLS19 - SPIRULINE, CONSTRUIRE EN TERRE, ÉMISSION POLITIQUE, NUCLÉAIRE AMÉRICAIN - # Video title #RDLS19 - SPIRULINE, CONSTRUIRE EN TERRE, ÉMISSION POLITIQUE, NUCLÉAIRE AMÉRICAIN # URL https://www.youtube.com/watch?v=fyrAg_pReTY # Youtube subtitles language French # Duration 21:37 # Subtitles URL https://www.youtube.com/timedtext_editor?bl=vmp&ref=player&lang=fr&action_mde_edit_form=1&tab=captions&v=fyrAg_pReTY&ui=hd&captions-r=1
process
spiruline construire en terre émission politique nucléaire américain video title spiruline construire en terre émission politique nucléaire américain url youtube subtitles language french duration subtitles url
1
4,037
6,971,796,307
IssuesEvent
2017-12-11 15:08:13
ontop/ontop
https://api.github.com/repos/ontop/ontop
closed
Ontop native mapping parser: decouple the grammar from source code generation
status: fixed topic: mapping processing type: enhancement
The antlr 3 grammar used to parse ontop native mappings contains embedded Java code, used for source generation. This makes debugging/refactoring more complex. Solution: switch to anlr4, which allows decoupling the grammar from code generation, and refactor the grammar accordingly.
1.0
Ontop native mapping parser: decouple the grammar from source code generation - The antlr 3 grammar used to parse ontop native mappings contains embedded Java code, used for source generation. This makes debugging/refactoring more complex. Solution: switch to anlr4, which allows decoupling the grammar from code generation, and refactor the grammar accordingly.
process
ontop native mapping parser decouple the grammar from source code generation the antlr grammar used to parse ontop native mappings contains embedded java code used for source generation this makes debugging refactoring more complex solution switch to which allows decoupling the grammar from code generation and refactor the grammar accordingly
1
77,891
7,606,916,907
IssuesEvent
2018-04-30 14:52:11
appium/appium
https://api.github.com/repos/appium/appium
closed
Unable to execute on real device; Inconsistent facing [MJSONWP] Encountered internal error running command: ProxyRequestError: Could not proxy command to remote server. Original error: Error: ESOCKETTIMEDOUT
ThirdParty XCUITest
## The problem I am facing the error when I am running the scripts on a real device. The same is observed with other team members. ## Environment * Appium version (or git revision) that exhibits the issue: 1.7.2 ; 1.8.0beta3 ; Appium destop * Mobile platform/version under test: Android * Real device or emulator/simulator: One plus 3t ## Details We don't receive any error. it hangs for a long time and gives ESOCKETTIMEDOUT error. ## Link to Appium logs https://gist.github.com/Rakeshsp05/8aaffbeea974082c4d17178f83737e3e.js
1.0
Unable to execute on real device; Inconsistent facing [MJSONWP] Encountered internal error running command: ProxyRequestError: Could not proxy command to remote server. Original error: Error: ESOCKETTIMEDOUT - ## The problem I am facing the error when I am running the scripts on a real device. The same is observed with other team members. ## Environment * Appium version (or git revision) that exhibits the issue: 1.7.2 ; 1.8.0beta3 ; Appium destop * Mobile platform/version under test: Android * Real device or emulator/simulator: One plus 3t ## Details We don't receive any error. it hangs for a long time and gives ESOCKETTIMEDOUT error. ## Link to Appium logs https://gist.github.com/Rakeshsp05/8aaffbeea974082c4d17178f83737e3e.js
non_process
unable to execute on real device inconsistent facing encountered internal error running command proxyrequesterror could not proxy command to remote server original error error esockettimedout the problem i am facing the error when i am running the scripts on a real device the same is observed with other team members environment appium version or git revision that exhibits the issue appium destop mobile platform version under test android real device or emulator simulator one plus details we don t receive any error it hangs for a long time and gives esockettimedout error link to appium logs
0
12,069
7,776,503,693
IssuesEvent
2018-06-05 08:17:29
SketchUp/rubocop-sketchup
https://api.github.com/repos/SketchUp/rubocop-sketchup
opened
Catch string comparison of class names
SketchupPerformance cop enhancement
``` ent.class.name !="Sketchup::Group" ``` This is just as slow as using `typename`.
True
Catch string comparison of class names - ``` ent.class.name !="Sketchup::Group" ``` This is just as slow as using `typename`.
non_process
catch string comparison of class names ent class name sketchup group this is just as slow as using typename
0
427,534
12,396,426,041
IssuesEvent
2020-05-20 20:30:55
eBay/ebayui-core
https://api.github.com/repos/eBay/ebayui-core
closed
Carousel: Pagination dots do not have a visible focus outline (IE11) (CoreUI v4)
aspect: IE11 aspect: a11y priority: 4 status: backlog type: bug
<!-- Delete any sections below that are not relevant. --> # Bug Report ## eBayUI Version: 4.4.3 (currently upgrading to 4.5.10) + IE11 ## Description <!-- What's the bug? Include steps to reproduce, actual vs. expected behavior, etc. --> Action Performed 1. Tab to focus the navigation icons of any carousel Expected Result Each interactive element has a visible focus outline. Actual Result The navigation icons of the carousel do not have a visible focus outline. Notes: This is a keyboard navigation issue. It is not observed using Firefox, Chrome or macOS Safari ## Workaround <!-- Is there a known workaround? If so, what is it? --> ## Screenshots <!-- Upload screenshots if appropriate. -->
1.0
Carousel: Pagination dots do not have a visible focus outline (IE11) (CoreUI v4) - <!-- Delete any sections below that are not relevant. --> # Bug Report ## eBayUI Version: 4.4.3 (currently upgrading to 4.5.10) + IE11 ## Description <!-- What's the bug? Include steps to reproduce, actual vs. expected behavior, etc. --> Action Performed 1. Tab to focus the navigation icons of any carousel Expected Result Each interactive element has a visible focus outline. Actual Result The navigation icons of the carousel do not have a visible focus outline. Notes: This is a keyboard navigation issue. It is not observed using Firefox, Chrome or macOS Safari ## Workaround <!-- Is there a known workaround? If so, what is it? --> ## Screenshots <!-- Upload screenshots if appropriate. -->
non_process
carousel pagination dots do not have a visible focus outline coreui bug report ebayui version currently upgrading to description action performed tab to focus the navigation icons of any carousel expected result each interactive element has a visible focus outline actual result the navigation icons of the carousel do not have a visible focus outline notes this is a keyboard navigation issue it is not observed using firefox chrome or macos safari workaround screenshots
0
295,549
25,482,093,104
IssuesEvent
2022-11-25 23:29:30
mozilla-mobile/focus-android
https://api.github.com/repos/mozilla-mobile/focus-android
closed
Intermittent UI test failure - SwitchLocaleTest.FrenchLocaleTest
eng:ui-test eng:intermittent-test eng:disabled-test
Was disabled on https://github.com/mozilla-mobile/focus-android/pull/7699 because of compose scrolling issue: https://github.com/mozilla-mobile/focus-android/issues/7282
3.0
Intermittent UI test failure - SwitchLocaleTest.FrenchLocaleTest - Was disabled on https://github.com/mozilla-mobile/focus-android/pull/7699 because of compose scrolling issue: https://github.com/mozilla-mobile/focus-android/issues/7282
non_process
intermittent ui test failure switchlocaletest frenchlocaletest was disabled on because of compose scrolling issue
0
187,933
14,434,468,361
IssuesEvent
2020-12-07 07:08:09
pachyderm/pachyderm
https://api.github.com/repos/pachyderm/pachyderm
opened
Local pachd fails to come up, error: "unable to write to object storage: The specified bucket does not exist"
test flake
When running in dev/local mode, pachd uses local disk in lieu of object storage. In Travis, though, I have a test failure where local pachd is failing to come up, printing the error: `error setting up Internal Pachd GRPC Server: error setting up Block API GRPC Server: unable to write to object storage: The specified bucket does not exist`, which doesn't make sense as Pachd shouldn't be connected to any object storage service. It's possible that a directory that pachd expects to exist doesn't exist, but I'm not sure which directory is missing or why Full Travis logs: [travis_logs.txt](https://github.com/pachyderm/pachyderm/files/5650967/travis_logs.txt)
1.0
Local pachd fails to come up, error: "unable to write to object storage: The specified bucket does not exist" - When running in dev/local mode, pachd uses local disk in lieu of object storage. In Travis, though, I have a test failure where local pachd is failing to come up, printing the error: `error setting up Internal Pachd GRPC Server: error setting up Block API GRPC Server: unable to write to object storage: The specified bucket does not exist`, which doesn't make sense as Pachd shouldn't be connected to any object storage service. It's possible that a directory that pachd expects to exist doesn't exist, but I'm not sure which directory is missing or why Full Travis logs: [travis_logs.txt](https://github.com/pachyderm/pachyderm/files/5650967/travis_logs.txt)
non_process
local pachd fails to come up error unable to write to object storage the specified bucket does not exist when running in dev local mode pachd uses local disk in lieu of object storage in travis though i have a test failure where local pachd is failing to come up printing the error error setting up internal pachd grpc server error setting up block api grpc server unable to write to object storage the specified bucket does not exist which doesn t make sense as pachd shouldn t be connected to any object storage service it s possible that a directory that pachd expects to exist doesn t exist but i m not sure which directory is missing or why full travis logs
0
8,153
11,354,867,817
IssuesEvent
2020-01-24 18:39:31
googleapis/java-os-login
https://api.github.com/repos/googleapis/java-os-login
closed
Promote to Beta
type: process
Package name: **google-cloud-os-login** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [x] All manual types and methods have comment documentation - [x] Package name is idiomatic for the platform - [x] At least one integration/smoke test is defined and passing - [x] Central GitHub README lists and points to the per-API README - [x] Per-API README links to product page on cloud.google.com - [x] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
Promote to Beta - Package name: **google-cloud-os-login** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [x] All manual types and methods have comment documentation - [x] Package name is idiomatic for the platform - [x] At least one integration/smoke test is defined and passing - [x] Central GitHub README lists and points to the per-API README - [x] Per-API README links to product page on cloud.google.com - [x] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
promote to beta package name google cloud os login current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1
9,918
12,958,568,907
IssuesEvent
2020-07-20 11:36:19
modi-w/AutoVersionsDB
https://api.github.com/repos/modi-w/AutoVersionsDB
closed
Add Spinner to Winform NotificationsControl
area-UI process-ready-for-implementation type-enhancement
**The Problem** While the system working on process, some steps could take some long time. So, the user sometime experience that nothing change on the screen and may think that the process completed or stuck. **Solution** Add a spinner to the the NotificationsControl, that shown only on working process.
1.0
Add Spinner to Winform NotificationsControl - **The Problem** While the system working on process, some steps could take some long time. So, the user sometime experience that nothing change on the screen and may think that the process completed or stuck. **Solution** Add a spinner to the the NotificationsControl, that shown only on working process.
process
add spinner to winform notificationscontrol the problem while the system working on process some steps could take some long time so the user sometime experience that nothing change on the screen and may think that the process completed or stuck solution add a spinner to the the notificationscontrol that shown only on working process
1
80,942
10,076,456,016
IssuesEvent
2019-07-24 16:16:50
sul-cidr/noh
https://api.github.com/repos/sul-cidr/noh
opened
Reduce Libretto space
design low priority
Remove singing style from the Libretto and make it take less vertical space to give narrative more room.
1.0
Reduce Libretto space - Remove singing style from the Libretto and make it take less vertical space to give narrative more room.
non_process
reduce libretto space remove singing style from the libretto and make it take less vertical space to give narrative more room
0
171,880
14,346,621,308
IssuesEvent
2020-11-29 01:44:12
blazingbulldogs/travis-the-chimp
https://api.github.com/repos/blazingbulldogs/travis-the-chimp
closed
Lack of info
documentation
You didn't specify exactly how does this work, the thresholds or how to trigger it. It's working rn (tysm for the heroku link) but its jsut saying "watching you". what now?
1.0
Lack of info - You didn't specify exactly how does this work, the thresholds or how to trigger it. It's working rn (tysm for the heroku link) but its jsut saying "watching you". what now?
non_process
lack of info you didn t specify exactly how does this work the thresholds or how to trigger it it s working rn tysm for the heroku link but its jsut saying watching you what now
0
8,831
11,941,095,537
IssuesEvent
2020-04-02 17:50:08
MicrosoftDocs/vsts-docs
https://api.github.com/repos/MicrosoftDocs/vsts-docs
closed
What about custom groups?
Pri1 devops-cicd-process/tech devops/prod
"If you create an environment within a YAML, contributors and project administrators will be granted Administrator role. This is typically used in provisioning Dev/Test environments." =&gt; We have custom groups in our projects - When env is created automatically, these groups are added as readers. It would be nice if we have a possibility to change the standard behaviour of the automatic creation - so that it is possible to set own standard permissions (like template or sth.) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77d95db6-9983-7346-d0eb-4b7443e4e252 * Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087 * Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#feedback) * Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
What about custom groups? - "If you create an environment within a YAML, contributors and project administrators will be granted Administrator role. This is typically used in provisioning Dev/Test environments." =&gt; We have custom groups in our projects - When env is created automatically, these groups are added as readers. It would be nice if we have a possibility to change the standard behaviour of the automatic creation - so that it is possible to set own standard permissions (like template or sth.) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77d95db6-9983-7346-d0eb-4b7443e4e252 * Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087 * Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#feedback) * Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
what about custom groups if you create an environment within a yaml contributors and project administrators will be granted administrator role this is typically used in provisioning dev test environments gt we have custom groups in our projects when env is created automatically these groups are added as readers it would be nice if we have a possibility to change the standard behaviour of the automatic creation so that it is possible to set own standard permissions like template or sth document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
1,375
3,932,024,540
IssuesEvent
2016-04-25 14:31:26
CGAL/cgal
https://api.github.com/repos/CGAL/cgal
closed
Non-reproducible runtime error of test/Point_set_processing/vcm_all_test.cpp
bug Pkg::Point_set_processing
While comparing the test results from today and yesterday, I found that the test `vcm_all_test` sometimes has runtime errors, without any visible reason (nothing in the history of the branch can explain why the test suddenly failed). For example: * https://cgal.geometryfactory.com/CGAL/testsuite/CGAL-4.8-Ic-160/Point_set_processing_3/TestReport_afabri_x64_Cygwin-Windows8_MSVC2012-Release-32bits.gz * https://cgal.geometryfactory.com/CGAL/testsuite/CGAL-4.8-I-146/Point_set_processing_3/TestReport_cgaltester_x86-64_Darwin-13.0_Apple-clang-5.0_Release.gz Is that test random? Do you see a reason why the test can exit with an error without displaying any useful message? Cc: @afabri, @palliez, @sgiraudot
1.0
Non-reproducible runtime error of test/Point_set_processing/vcm_all_test.cpp - While comparing the test results from today and yesterday, I found that the test `vcm_all_test` sometimes has runtime errors, without any visible reason (nothing in the history of the branch can explain why the test suddenly failed). For example: * https://cgal.geometryfactory.com/CGAL/testsuite/CGAL-4.8-Ic-160/Point_set_processing_3/TestReport_afabri_x64_Cygwin-Windows8_MSVC2012-Release-32bits.gz * https://cgal.geometryfactory.com/CGAL/testsuite/CGAL-4.8-I-146/Point_set_processing_3/TestReport_cgaltester_x86-64_Darwin-13.0_Apple-clang-5.0_Release.gz Is that test random? Do you see a reason why the test can exit with an error without displaying any useful message? Cc: @afabri, @palliez, @sgiraudot
process
non reproducible runtime error of test point set processing vcm all test cpp while comparing the test results from today and yesterday i found that the test vcm all test sometimes has runtime errors without any visible reason nothing in the history of the branch can explain why the test suddenly failed for example is that test random do you see a reason why the test can exit with an error without displaying any useful message cc afabri palliez sgiraudot
1